Re: Intent to implement and ship: Blocking FTP subresources

2018-04-09 Thread Patrick McManus
imo, you really need to add a pref to cover this (I'm not saying make it
opt-in, just make it preffable). It will break something somewhere, and at
least you can tell that poor person they can get compat back via config.

It also has a very small possibility of breaking enterprises or something
we would discover late, and we would want to be able to push a pref to fix
that.
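
To make the shape of that concrete, the gate could be as small as something
like this (a sketch only - the pref name and call site are inventions for
illustration):

  #include "mozilla/Preferences.h"
  #include "nsIURI.h"

  // Deny FTP subresource loads unless a compat pref turns the block off.
  static bool ShouldBlockFTPSubresource(nsIURI* aURI, bool aIsTopLevel) {
    if (aIsTopLevel) {
      return false;  // top-level FTP documents stay reachable
    }
    bool isFtp = false;
    if (NS_FAILED(aURI->SchemeIs("ftp", &isFtp)) || !isFtp) {
      return false;
    }
    // A single pref flip restores the old behavior for an affected user.
    return mozilla::Preferences::GetBool("security.block_ftp_subresources",
                                         true);
  }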


On Mon, Apr 9, 2018 at 9:13 AM, Tom Schuster  wrote:

> Summary: All FTP subresources in HTTPS pages (this also includes blob:
> etc) will be blocked. Opening FTP links as toplevel documents is still
> possible.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1404744
>
> Platform coverage: All
> Target release: Firefox 61 (this already landed, but we forgot to send
> this, sorry!)
> Preference behind which this will be implemented: None
> Is this feature enabled by default in sandboxed iframes: Yes, enabled
> everywhere
> DevTools bug: None
> Do other browser engines implement this?
> Chrome shipped in M62?
> web-platform-tests: No
> Secure contexts: n/a


Re: PSA: nsIURI implementations are now threadsafe

2018-03-26 Thread Patrick McManus
\o/ !!


On Friday, March 23, 2018, Valentin Gosu  wrote:

> Hello everyone,
>
> I would like to announce that with the landing of bug 1447194, all nsIURI
> implementations in Gecko are now threadsafe, as well as immutable. As a
> consequence, you no longer have to clone a URI when you pass it around, as
> it's guaranteed not to change, and now it's OK to release them off the main
> thread.
>
> If you need to change a nsIURI, you should use the nsIURIMutator interface
> (in JavaScript - just call .mutate() on the URI) or the NS_MutateURI
>  TestURIMutator.cpp#22>
> helper class (in C++).
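>
> For example, a minimal sketch of the new C++ pattern (adapted from the
> shape of the TestURIMutator.cpp gtest linked above; treat the exact calls
> as illustrative):
>
>   static nsresult Example() {
>     nsCOMPtr<nsIURI> base;
>     nsresult rv = NS_NewURI(getter_AddRefs(base),
>                             NS_LITERAL_CSTRING("http://example.com/path"));
>     NS_ENSURE_SUCCESS(rv, rv);
>
>     nsCOMPtr<nsIURI> withRef;
>     return NS_MutateURI(base)
>              .SetRef(NS_LITERAL_CSTRING("section"))
>              .Finalize(withRef);  // 'base' itself is never modified
>   }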
>
> More info here:
> https://wiki.mozilla.org/Necko/nsIURI
>
> If you find any bugs, make them block bug 922464 (OMT-nsIURI)
>
> I'd like to thank everyone who helped review the patches, especially Honza
> Bambas who reviewed most of my patches.
>
> Cheers!


Re: FYI: Short Nightly Shield Study involving DNS over HTTPs (DoH)

2018-03-19 Thread Patrick McManus
The objective here is a net improvement for privacy and integrity. It is
indeed a point of view, with Nightly acting as an opinionated User Agent on
behalf of its users. IMO we can't be afraid of pursuing experiments that
help develop those ideas even when they move past traditional modes.
Traditional DNS is a swamp - ignoring that isn't doing our users any
favors. This is obviously not an effort driven by engineering alone.

Nightly is an explicitly experimental channel which is part of the reason
it is the choice for the first validation.

A question came up about geo-based DNS, and I've got a couple of technical
comments about risk mitigation there:
 1] geoDNS use is on the wane, as TCP anycast approaches work much better
in practice
 2] the granularity of the CDN being used is much finer than the
granularity of most geoDNS resolution, which tends to choose between very
big regions (O(~1/2 a continent)), so that should continue to work the same.

I initiated this thread on dev-platform because imo it is a reasonable
scope for Nightly changes, especially ephemeral pref-flip changes, and
that's why the FYI goes here. It's definitely not a secret. Messaging to a
larger user base than is impacted invites confusion. Future possible
changes impacting larger populations or putting things on trains would use
other, more broadly read communication channels.

-Patrick



On Mon, Mar 19, 2018 at 9:05 AM, Henri Sivonen  wrote:

> On Mon, Mar 19, 2018 at 10:07 AM, Daniel Stenberg 
> wrote:
> > On Sun, 18 Mar 2018, Eric Shepherd (Sheppy) wrote:
> >
> > I don't have such a far-reaching agreement with my ISP and its DNS.
>
> 1) Mozilla doesn't choose the ISP on users' behalf. (This is the key
> reason.)
> 2) The ISP sees the Host header in unencrypted traffic and SNI in
> encrypted traffic anyway. (This is a secondary reason.)
>
> > I don't
> > have such an agreement at all with 8.8.8.8 or other publicly provided DNS
> > operators.
>
> Using such resolvers is a decision that the user makes and not a
> decision that Mozilla *silently* makes on their behalf.
>
> > What other precautions or actions can we do to reduce the risk of this
> being
> > perceived as problematic?
>
> I suggested two ways on the bug.
>
> > Would reducing the test population really make it
> > much different?
>
> No.
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/


Re: FYI: Short Nightly Shield Study involving DNS over HTTPs (DoH)

2018-03-18 Thread Patrick McManus
Obviously, using a central resolver is the downside to this approach - but
it's being explored because we believe that using the right resolver can be
a net win compared to the disastrous state of unsecured local DNS and the
privacy and hijacking problems that go on there. It's just a swamp out there.
(You can of course disable this from about:studies or just by setting your
local trr.mode pref to 0 - but this discussion is meaningfully about
defaults.)

And in this case the operating agreement with the DNS provider is part of
making that right choice. For this test that means the operator will not
retain for themselves or sell/license/transfer to a third party any PII
(including IP addresses and other user identifiers) and will not combine
the data it gets from this project with any other data it might have. A
small amount of data necessary for troubleshooting the service can be kept
for at most 24 hours, but that data is limited to name, DNS type, a
timestamp, a response code, and the CDN node that served it.



On Sun, Mar 18, 2018 at 11:51 PM, Dave Townsend <dtowns...@mozilla.com>
wrote:

> On Sat, Mar 17, 2018 at 3:51 AM Patrick McManus <pmcma...@mozilla.com>
> wrote:
>
>> DoH is an open standard and for this test we'll be using the DoH server
>> implementation at Cloudflare. As is typical for Mozilla, when we
>> default-interact with a third party service we have a legal agreement in
>> place to look out for the data retention/use/redistribution/etc interests
>> of both our users and Mozilla itself.
>>
>
> So my understanding of the study is that for those in the study branch
> (50% of Nightly users) we'll be sending every hostname they visit to
> Cloudflare. That sounds problematic to me. Can you give more details about
> the legal agreement?
>


FYI: Short Nightly Shield Study involving DNS over HTTPs (DoH)

2018-03-17 Thread Patrick McManus
Hi All, FYI:

Soon we'll be launching a Nightly-based pref-flip Shield study to confirm
the feasibility of doing DNS over HTTPS (DoH). If all goes well the study
will launch Monday (and if not, probably the following Monday). It will run
<= 1 week. If you're running Nightly and you want to see if you're in the
study, check about:studies.

Access to global DNS data is commonly manipulated and can easily be blocked
and/or collected. DNS services are also sometimes poorly provisioned
creating performance problems. We posit that integrity and confidentiality
protected access to well provisioned larger caches will help our users. In
a nutshell, that's what DoH does.

This work relies on an IETF specification that I hope will go into Last Call
this coming week:
https://datatracker.ietf.org/doc/draft-ietf-doh-dns-over-https/

This initial test is focused on performance feasibility assessment and we
won't actually be using the DNS data returned from the DoH server (i.e. the
traditional DNS service is used in parallel and only those answers are used
- the code calls this shadow mode.) This is obviously not the optimal
arrangement of things - the anticipated end state will involve running in
"first mode" where DoH is normally used and soft fails (either based on DNS
or TCP errors) to traditional DNS. There are also modes where DoH is used
and hard fails (known as "only mode" - it requires some bootstrap info),
and a mode where DoH and traditional race against each other using
whichever is faster. There are accommodations in place to deal with
split-horizon DNS issues.
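
For reference, my reading of how these modes map onto the local trr.mode
pref mentioned earlier (the exact numbers are an assumption on my part -
the TRR code in the tree is the authoritative list):

  enum TrrMode {
    MODE_NATIVEONLY = 0,  // TRR disabled, native DNS only
    MODE_PARALLEL   = 1,  // race DoH and native, use whichever is faster
    MODE_TRRFIRST   = 2,  // "first mode": DoH first, soft fail to native
    MODE_TRRONLY    = 3,  // "only mode": hard fail, needs bootstrap info
    MODE_SHADOW     = 4,  // "shadow mode": run DoH but use native answers
    MODE_TRROFF     = 5,  // explicitly off
  };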

DoH is an open standard and for this test we'll be using the DoH server
implementation at Cloudflare. As is typical for Mozilla, when we
default-interact with a third party service we have a legal agreement in
place to look out for the data retention/use/redistribution/etc interests
of both our users and Mozilla itself.

The study launch bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1446404

Daniel Stenberg has written much of the code for this - he, I, and Valentin
Gosu are the team that will chase down any issues. Feel free to reach out
to us (or #necko on slack). There is currently one open issue related to
captive portals and "only mode" but that should not be triggered by the
study as "only mode" is not used.

-Patrick


Re: Intent to ship: NetworkInformation

2016-12-15 Thread Patrick McManus
Hi All -

Generally speaking, releasing more information about what's behind the
firewall is an anti-goal. I have the same reaction others in this thread
have - this API exposes much more information than is really needed, and
the information it provides is of questionable usefulness anyhow.

The design choice on the server often seems to boil down to wanting to know
"is this data metered or not" - which clearly has utility. There are many
algorithms in necko I want to apply the same criterion to. I'm still kind of
queasy about leaking even this, but if dan, richard, or ekr, who are all
sufficiently cynical about such things, thought it was OK, I would feel
better about that much.

But the performance variance of what 3g vs 4g vs wifi vs wired actually
means in any instance is so broad, and has so much overlap, that it's simply
not a useful performance input. And as long as you're using constant
numbers from a table, there really is little you can do with certainty
other than maybe segregating 2g/bt from everything else... and even that
conclusion might be bogus.

Further, end-to-end bandwidth prediction simply does not exist with any
specificity - if it did, the work of congestion controllers would be
unnecessary. Folks in this thread have talked about bridges, VPNs, etc.,
and that's just part of the story. The spec sidesteps that by assuming the
last mile is the bottleneck link, that the last mile is otherwise unused,
and assuming, weirdly, that multipath is a normal thing. That's handy for
the spec, but doesn't bear much on reality while it leaks local information.
Indeed it ignores the fundamental organization of IP networks as packet
switched connections of networks of varying types. (Give me a POTS line, I
hear you crying - but even that is likely fake circuit switching over VoIP
now.)

From an implementation pov, the browser could over time give a reasonable
metric about latency and bandwidth 'to the internet' just through passive
observation, maybe as a 3x3 l/m/h matrix. But this would be for the client
in general and not really for the path between the client and the origin -
the latter is really what the origin wants. Without adding the per-origin
overhead of a new speed test, I would think the ResourceTiming already
available to it would be as good a guide as anything else. So even
though it would be a cool engineering task to look at this whole-browser,
it's of questionable utility imo... and doing so leaks performance
observations cross-origin.

I guess the other thing I would want to consider here is the competitive
aspect of this API, but I wouldn't feel obligated to ship it for checklist
reasons.

tl;dr: is the metered-data bit enough, and if so, can we just implement this
by always returning one of two configs (cell vs wifi, e.g.) with constant bw?

-P


Re: Converting assertions into release assertions

2016-09-22 Thread Patrick McManus
+1 on MOZ_DIAGNOSTIC_ASSERT - it's been very useful to me as well.
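
For anyone new to the three flavors, a quick sketch of the distinction (see
mozilla/Assertions.h for the real definitions):

  #include "mozilla/Assertions.h"
  #include "nsTArray.h"

  void UseElement(nsTArray<int>& aArr, size_t aIndex) {
    // Checked in debug builds only; compiled out everywhere else.
    MOZ_ASSERT(aIndex < aArr.Length());
    // Fatal in debug and pre-release builds; a no-op on release.
    MOZ_DIAGNOSTIC_ASSERT(aIndex < aArr.Length());
    // Fatal in every build, including release.
    MOZ_RELEASE_ASSERT(aIndex < aArr.Length());
    // ... go on to use aArr[aIndex] ...
  }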

On Thu, Sep 22, 2016 at 6:40 AM, Bobby Holley  wrote:

> There's also MOZ_DIAGNOSTIC_ASSERT, which is fatal in pre-release builds
> but not release ones. It can be a good compromise to find bugs in the wild
> when the performance cost is probably negligible but you're still not quite
> comfortable shipping it on release. I added it last year while working on
> stability for the media stack, and found it very useful.
>
> bholley
>
> On Wed, Sep 21, 2016 at 9:28 PM, Nicholas Nethercote <
> n.netherc...@gmail.com
> > wrote:
>
> > Greetings,
> >
> > Assertions, such as MOZ_ASSERT, are great. But they only run in debug
> > builds.
> >
> > Release assertions, such as MOZ_RELEASE_ASSERT, run in all builds.
> >
> > I want to highlight a nice case where converting a normal assertion
> > into a release assertion was a win. In bug 1159244 Michael Layzell did
> > this in nsTArray::ElementAt(), to implement a form of always-on array
> > bounds checking. See
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1159244#c55 for
> > discussion of how this is finding real bugs in the wild. (As well as
> > identifying new bugs, it's also helping understand existing crash
> > reports, e.g. see bug 1291082 where the crash signature changed.)
> >
> > Obviously we can't convert every normal assertion in the codebase into
> > a release assertion. But it might be worth thinking about which normal
> > assertions are good candidates for conversion. Good candidates include
> > any assertion where the consequence of failure is dangerous, e.g.
> > might cause memory access violations.
> >
> > Nick


Re: How useful is the WinSock LSP field in crash reports?

2016-07-25 Thread Patrick McManus
I do use it - but LSPs obviously cause the networking team more trouble
than they cause other folks.

I have no objection if the UI just wants to move it somewhere less
obtrusive, though - as long as it's there.

On Mon, Jul 25, 2016 at 6:29 PM, Aaron Klotz  wrote:

> On 7/25/2016 12:20 AM, Nicholas Nethercote wrote:
>
>> I suspect it is rare for this field to be useful. (I've never found it
>> useful.) It is also long, typically dozens of lines, and typically accounts
>> for a quarter or more of the space taken up by the fields in the "Details"
>> tab.
>>
>
> I use it when evaluating potential additions to the DLL blocklist. If the
> proposed DLL is an LSP, we cannot block it. Typically the crash reports
> whose correlations made the case for blocking the DLL will also show
> whether or not that DLL is an LSP via that field.
>
> The additional GUID spew is also useful if we end up landing the LSP
> blocklist that I prototyped in bug 1238735.
>
>> I propose removing it from the "Details" tab. It will still be visible in
>> the "Metadata" tab. Any objections? Am I missing any reason why it is
>> frequently useful?
>
> No objections. As long as it is available *somewhere* on crash-stats,
> that should be fine.


Re: The Whiteboard Tag Amnesty

2016-06-08 Thread Patrick McManus
That's useful, thanks. I think the word amnesty implied the death penalty
for existing whiteboard tags.

What it sounds like is that you're just offering (for a limited time) to do
conversions on an opt-in basis? That's great.

-P


On Wed, Jun 8, 2016 at 3:11 PM, Emma Humphries  wrote:

>
>
> > On Jun 8, 2016, at 1:43 PM, Kartikaya Gupta  wrote:
> >
> > What happens after June 24? Is the whiteboard field going to be removed?
> >
>
> No, the whiteboard field remains, but any tags migrated will be deleted
> from existing values.
>
> If a tag is used across teams, I'll work out a resolution such as both
> teams using the new keyword, or specific keywords as long as they don't
> semantically copy other existing fields.
>
>
> >> On Wed, Jun 8, 2016 at 4:32 PM, Emma Humphries 
> wrote:
> >> tl;dr -- nominate whiteboard tags you want converted to keywords. Do it
> by
> >> 24 June 2016.
> >>
> >> We have a love-hate relationship with the whiteboard field in bugzilla.
> On
> >> one hand, we can add team-specific meta data to a bug. On the other
> hand,
> >> it's not an indexed field or a real tag system, making it hard to parse,
> >> search, and update.
> >>
> >> But creating keywords is a hassle since you have to request them.
> >>
> >> The long-term solution is to turn whiteboard into a proper tag system, but
> >> the Bugzilla Team's offering to help with some bulk conversion of
> >> whiteboard tags your teams use into keywords.
> >>
> >> To participate:
> >>
> >> 1. Create a Bug in the bugzilla.mozilla.org::Administration component
> for each
> >> whiteboard tag you want to convert.
> >>
> >> 2. The bug's description should have the old keyword, the new keyword
> you
> >> want to replace it with, and the description of this new keyword which
> will
> >> appear in the online help.
> >>
> >> 3. Make sure your keyword doesn't conflict with existing keywords, so be
> >> prepared to rename it. If your keyword is semantically similar to an
> >> existing keyword or other existing bugzilla field we'll talk you about a
> >> mass change to your bugs.
> >>
> >> 4. Make the parent bug,
> https://bugzilla.mozilla.org/show_bug.cgi?id=1279022,
> >> depend on your new bug.
> >>
> >> 5. CC Emma Humphries on the bug
> >>
> >> We will turn your whiteboard tag into a keyword and remove your old tag
> >> from the whiteboard tags, so make sure your dashboards and other tools
> that
> >> consume Bugzilla's API are updated to account for this.
> >>
> >> Please submit your whiteboard fields to convert by Friday 24 June 2016.
> >>
> >> Cheers,
> >>
> >> Emma Humphries


Re: The Whiteboard Tag Amnesty

2016-06-08 Thread Patrick McManus
As you note, the whiteboard tags are permissionless. That's their killer
property. Keywords, as you note, are not; that's their critical weakness.

Instead of fixing that situation in the "long term", can we please fix it
as a precondition of converting things? Mozilla doesn't need more
centralized systems. If keywords can't be 100% automated to be permissionless
(e.g. perhaps because they don't scale) then the new arrangement of things
is definitely worse.

I'll note that even for triage, our eventual system evolved rapidly, and
putting an administrator in the middle to add and drop keywords and
indices would have just slowed stuff down. Permissionless to me is a
requirement.


On Wed, Jun 8, 2016 at 2:43 PM, Kartikaya Gupta  wrote:

> What happens after June 24? Is the whiteboard field going to be removed?
>
> On Wed, Jun 8, 2016 at 4:32 PM, Emma Humphries  wrote:
> > tl;dr -- nominate whiteboard tags you want converted to keywords. Do it
> by
> > 24 June 2016.
> >
> > We have a love-hate relationship with the whiteboard field in bugzilla.
> On
> > one hand, we can add team-specific meta data to a bug. On the other hand,
> > it's not an indexed field or a real tag system, making it hard to parse,
> > search, and update.
> >
> > But creating keywords is a hassle since you have to request them.
> >
> > The long-term solution is to turn whiteboard into a proper tag system, but
> > the Bugzilla Team's offering to help with some bulk conversion of
> > whiteboard tags your teams use into keywords.
> >
> > To participate:
> >
> > 1. Create a Bug in the bugzilla.mozilla.org::Administration component
> for each
> > whiteboard tag you want to convert.
> >
> > 2. The bug's description should have the old keyword, the new keyword you
> > want to replace it with, and the description of this new keyword which
> will
> > appear in the online help.
> >
> > 3. Make sure your keyword doesn't conflict with existing keywords, so be
> > prepared to rename it. If your keyword is semantically similar to an
> > existing keyword or other existing bugzilla field we'll talk you about a
> > mass change to your bugs.
> >
> > 4. Make the parent bug,
> https://bugzilla.mozilla.org/show_bug.cgi?id=1279022,
> > depend on your new bug.
> >
> > 5. CC Emma Humphries on the bug
> >
> > We will turn your whiteboard tag into a keyword and remove your old tag
> > from the whiteboard tags, so make sure your dashboards and other tools
> that
> > consume Bugzilla's API are updated to account for this.
> >
> > Please submit your whiteboard fields to convert by Friday 24 June 2016.
> >
> > Cheers,
> >
> > Emma Humphries


On the merits of bringing back MSVC2015 to 48 to fix a top crasher

2016-05-19 Thread Patrick McManus
Hi All,

tl;dr: The necko team has for months been chasing a Windows-only top
crasher. It is a shutdown hang - Bug 1158189. The crash stopped happening
on nightly-48 back in March and that ‘fixed state’ has been riding the
normal trains. Last weekend it returned to crashing on aurora-48 but not on
nightly-49. The data indicates that the toolchain changes are the reason.
We should talk about whether to put MSVC-2015 back on aurora-48 or to live
with the crashes for an extra 6 weeks.

This is a Windows-only bug and essentially boils down to non-blocking
networking operations sometimes blocking (maybe forever) inside system
calls. It impacts a range of calls - send, recv, poll, connect, etc.. Often
LSPs and AV software is involved, but not always. Chrome has seen behavior
like this from time to time in the past, but anecdotally it is worse for
us. They aren’t sure if they have dealt with it since they changed to
msvc-2015.

On 46.0.1 this is the #18 top crasher (about 0.8% of crashes). On 47 this
is the #2 crasher (about 3.5% of crashes).  On 48 over the last 3 days it
is the #10 top crasher (1.25% of crashes), but is just in the noise for 48
when measured over the last few weeks as it just started recurring. It is
not a factor on 49.

We honestly don’t know if this is only a shutdown hang or not. It certainly
could be triggered by the shutdown path but just as easily this could be
happening during normal browsing and the user’s reaction would be to
shutdown the browser where networking (i.e. the socket thread) appears hung.

During the 48 cycle we hadn’t yet figured out a plan to attack it directly,
and while we were inserting diagnostics for it, we also cleaned up every
somewhat related issue we could find. When the hang disappeared from the
nightly crash stats we attributed it to a second order impact of a
different bugfix that landed at about the same time. Attempts to uplift
that fix to 47 did not help with crashes on 47, which we attributed to the
complex dependencies of the bug we uplifted (and eventually backed out of
47) - but it seems now the primary reason was that the toolchain on 47 was
different: when the toolchain on 48 went back to msvc-2013 last Friday,
the crashes returned on aurora 48. Version 49 (still msvc-2015) has not
seen a crash.

The last nightly crash was 20160324030447 - the msvc2015 patch landed 215
csets later on nightly-48. The crash was not seen again on 48 or 49 until
aurora-48 20160514004011 which had the reversion to msvc2013 just 31 csets
earlier. Nightly-49, which has only ever had msvc2015 as its compiler, has
not seen the crash.

I’m not sure how to compare the size of the populations impacted by the
crash vs the size of the population impacted by the SSE dependency. My
intuition says the no-SSE population is very small and we might be better
off overall with MSVC-2015 on the 48 channel. We're going to orphan that
population eventually anyhow, but perhaps we want to live with the crashes
while we prep the infrastructure to deal with it, as Nathan mentions in a
different thread. I'm really torn.

Beyond the product tradeoffs, I am acutely aware that changing toolchains
is a real pain for everyone and going back and forth is kind of insane. I’m
sorry to even float the idea at this point - we hadn’t hypothesized that
the crash improved because of the change in msvc until it returned over the
weekend.


Thoughts?


-Patrick and Dragana

[This is a resend because filters hate me. My apologies if you receive it
twice.]


Re: Intent to (sort of) unship SSLKEYLOGFILE logging

2016-04-26 Thread Patrick McManus
I don't think the case for making this change (even to release builds) has
been successfully made yet and the ability to debug and iterate on the
quality of the application network stack is hurt by it.

The Key Log - in release builds - is part of the debugging strategy and is
used fairly commonly in the network stack diagnostics. The first line of
defense is dev tools, the second is NSPR logging, and the third is
wireshark with a key log because sometimes what is logged is not what is
really happening on the 'wire' (thus the need to troubleshoot).

Bug reporters are often not developers and sometimes do not have the option
of (or willingness to) running other builds. Removing functionality that
helps with that is damaging to our strategic goal of building our Core and
emphasizing quality. Bug 1188657 suggests that this functionality is for
diagnosing tricky TLS bugs, but it's just as helpful for diagnosing anything
using TLS, which we of course hope to make be everything.

But of course if it represents a security hole then it is medicine that
needs to be swallowed - I wouldn't argue against that. That's why I say the
case hasn't been made yet.

The mechanism requires machine-level control to enable - the same level of
control that can alter the Firefox binary, or annotate the CA root key
store, or any number of other well understood things. Daniel suggests that
Chrome will keep this functionality. Bug 1183318 handwaves around
social engineering attacks against this - but of course that's the same
vector for machine-level control and those other attacks as well - I don't
see anything really improved by making this change, but our usability and
ability to iterate on quality are damaged. Maybe I'm misunderstanding the
attack this change ameliorates?

Minimally, we should be having this discussion about a change in
functionality for Firefox 49 - not something that has just moved up a
release-train channel.

Lastly, as a more strategic point, I think reducing the tooling around HTTPS
serves to disincentivize HTTPS. Obviously, we don't want to do that.
Sometimes there are tradeoffs to be made; I'm skeptical of this one though.


On Tue, Apr 26, 2016 at 12:44 AM, Martin Thomson  wrote:

> In NSS, we have landed bug 1183318 [1], which I expect will be part of
> Firefox 48.
>
> This disables the use of the SSLKEYLOGFILE environment variable in
> optimized builds of NSS.  That means all released Firefox channels
> won't have this feature as it rides the trains.
>
> This feature is sometimes used to extract TLS keys for decrypting
> Wireshark traces [2].  The landing of this bug means that it will no
> longer be possible to log all your secret keys unless you have a debug
> build.
>
> This is a fairly specialized thing to want to do, and weighing
> benefits against risks in this case is an exercise in comparing very
> small numbers, which is hard.  I realize that this is very helpful for
> a select few people, but we decided to take the safe option in the
> absence of other information.
>
> (I almost forgot to send this, but then [3] reminded me in a very
> timely fashion.)
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1183318
> [2]
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
> [3]
> https://lists.mozilla.org/pipermail/dev-platform/2016-April/014573.html


Re: Help Needed : Firefox browser Configuration Data capture tool

2016-04-25 Thread Patrick McManus
You aren't clear on what level you want to capture the data at. The gold
standard for seeing exactly what is communicated would be Wireshark. When
HTTPS is used (hopefully all the time) it can automatically decode the
traces if you provide the key material -
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
can be used to obtain that for your own session.

Hope that helps.
hope that helps.

On Mon, Apr 25, 2016 at 2:07 AM, harshad wadkar 
wrote:

> Hello,
>
> I would like to view (capture) the data sent by a client web browser
> (firefox 45.0.2, ubuntu OS), to any visited website, when following
> about:config settings are done.
>
> I would like to view (capture) this data - with default setting as well as
> after changing the settings (user set)
>
> beacon.enabled
> browser.send_pings
> dom.event.clipboardevents.enabled
> media.peerconnection.enabled
> ...
> etc.
>
> Primary reason behind the question is - I would like to see what change in
> the data happens before and after the settings, when client browser sends
> data to any website.
>
> Can anyone suggest me any tool that will help me to capture such data. I
> have tried using HttpFox, Firebug but not got any success.
>
>
> Thanks and Regards,
> Harshad
>
> Wadkar Harshad Suryakant
> Res. No. : 020-26685821
> Cell No. : 09422517896


Re: PSA: Cancel your old Try pushes

2016-04-18 Thread Patrick McManus
Default should probably be fail push rather than auto cancel.. But +1 to
opting into parallel push explicitly. I've certainly used that on a few
occasions.

But the PSA here may be the most important part..
On Apr 15, 2016 3:37 PM, "Jonas Sicking"  wrote:

We could also make the default behavior be to cancel old pushes. And
then enable push message syntax for opting in to not cancelling.

/ Jonas

On Fri, Apr 15, 2016 at 10:19 AM, James Graham 
wrote:
> On 15/04/16 18:09, Tim Guan-tin Chien wrote:
>>
>> I wonder if there is any use cases to do multiple Try pushes of different
>> changesets but with the same bug number. Should we automatically cancel
>> the
>> old ones when there is a new one?
>
>
> Unfortunately there are legitimate uses for e.g. comparing the effects of
> two different changesets related to the same bug.
>
> On the other hand, without thinking too hard about the implementation
> details (which I am inclined to believe would be more complex than you
> might expect due to missing APIs, auth, etc.), it seems like it might be
> possible to extend |mach try| to prompt to cancel old pushes for the same bug.
>
>


Re: Proposal to stop revving UUIDs when changing XPIDL interfaces

2016-01-15 Thread Patrick McManus
On Fri, Jan 15, 2016 at 10:58 AM, Ehsan Akhgari 
wrote:

> Please let me know if you have any questions or concerns.



or cheers.

cheers!


Re: Dynamic Logging

2016-01-08 Thread Patrick McManus
On Fri, Jan 8, 2016 at 8:32 PM, Eric Rahm  wrote:

> Why is this so cool? Well now you don't need to restart your browser to
> enable logging [1]. You also don't have to set env vars to enable logging
> [2].
>


Epic! Thank you.
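
For later readers: my understanding of the mechanism is that log modules can
now be driven from prefs at runtime, e.g. from about:config (treat the exact
pref names as an assumption; Eric's post has the authoritative details):

  logging.nsHttp = 5
  logging.config.LOG_FILE = /tmp/firefox.log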


Re: What is the difference between asyncOpen2() and asyncOpen()

2015-12-07 Thread Patrick McManus
You should be able to just use asyncOpen2 - it will do security checks
for you that you may have needed to do outside asyncOpen (e.g. CSP) and
will reliably deal with things like redirects. :sicking or :ckerschb for
followups.
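
The mechanical part of the change is roughly this sketch (the subtlety is
that the channel must have been created with a LoadInfo, e.g. via
NS_NewChannel, so the checks have something to work from):

  // Before: the caller passes a listener plus an opaque context, and is
  // responsible for doing its own security checks around the open.
  nsresult rv = channel->AsyncOpen(listener, nullptr /* aContext */);

  // After: the context argument is gone; CSP, mixed content, and redirect
  // checks run inside the channel, driven by its LoadInfo.
  rv = channel->AsyncOpen2(listener);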

On Mon, Dec 7, 2015 at 6:00 PM, Philip Chee  wrote:

> I came across Bug 1182535 [tracking bug] change all callsites to
> asyncOpen2 instead of asyncOpen
>
> What's the difference and why should I switch?
>
> Can I just do |s/asyncOpen\(/asyncOpen2\(/g| or are there some
> subtleties I should be aware of?
>
> Phil
>
> --
> Philip Chee , 
> http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
> Guard us from the she-wolf and the wolf, and guard us from the thief,
> oh Night, and so be good for us to pass.


Re: WebUSB

2015-12-05 Thread Patrick McManus
On Fri, Dec 4, 2015 at 10:56 PM, Eric Rescorla  wrote:

>
>
> Color me unconvinced. One of the major difficulties with consumer
> electronics devices
> that are nominally connectable to your computer is that the vendors do a
> bad job
> of making it possible for third party vendors to talk to them. Sometimes
> this is done
> intentionally in the name of lock-in and sometimes it's done
> unintentionally through
> laziness, but in either case it's bad.


And often laziness takes the form of abandonware - one of the successes of
open source has been in filling in that kind of gap. The open web should be
able to do it too.


Re: [stats] Counting HTTP/2 domain names

2015-09-10 Thread Patrick McManus
No - generally we don't do origin-based telemetry, for privacy reasons.

On Wed, Sep 9, 2015 at 9:51 PM, Karl Dubost  wrote:

> Hi,
>
> Do we have a way to evaluate the number of domain names (not HTTP
> requests) which are communicating with Firefox using HTTP/2?
>
> Question triggered by the recent interesting post of Daniel
> http://daniel.haxx.se/blog/2015/09/07/http2-115-days-with-the-rfc/
>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
>


Re: Implement Fetch?

2015-07-21 Thread Patrick McManus
On Tue, Jul 21, 2015 at 5:01 PM, Honza Bambas hbam...@mozilla.com wrote:

 The main offenders here are:
 - synchronous on-*-request global notifications



I believe this is mostly what :sicking refers to when he talks about
[1] https://etherpad.mozilla.org/BetterNeckoSecurityHooks
and I agree that would be useful work..

But casual readers of this thread shouldn't be too depressed - a bunch of
high-data-volume consumers do manage to take their data events off the main
thread just fine, and as has been pointed out, I/O is definitely not taking
place there.



Re: Replacing PR_LOG levels

2015-05-22 Thread Patrick McManus
On Fri, May 22, 2015 at 4:11 PM, Eric Rescorla e...@rtfm.com wrote:

 I think it's generally valuable to have a trace level for all
 networking-type things.

 Having some separate mechanism seems like the more complicated thing.



+1 - I actually wasn't aware of this debug+1 mechanism, and now that I am, I
would like to make use of it.

Requiring special builds is much, much less satisfying, especially for
resolving interop problems reported in bugzilla.
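
For the archive: this ended up as the Verbose level in mozilla/Logging.h -
a sketch of the usage as I understand it (the module name is invented):

  #include "mozilla/Logging.h"

  static mozilla::LazyLogModule gExampleLog("example");

  void Example() {
    // Debug is the normal developer level...
    MOZ_LOG(gExampleLog, mozilla::LogLevel::Debug, ("connection established"));
    // ...and Verbose is the old PR_LOG_DEBUG+1 trace level.
    MOZ_LOG(gExampleLog, mozilla::LogLevel::Verbose, ("per-frame trace detail"));
  }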


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Patrick McManus
On Fri, May 1, 2015 at 2:07 PM, scough...@cpeip.fsu.edu wrote:

 Why encrypt (and slow down) EVERYTHING


I think this is largely outdated thinking. You can do TLS fast, and with
low overhead. Even on the biggest and most latency sensitive sites in the
world. https://istlsfastyet.com


 when most web content isn't worth encrypting?


Fundamentally HTTPS protects the transport of the content - not the secrecy
of the content itself.

It is after all likely stored in cleartext on each computer. This is an
important distinction no matter the nature of the content, because Firefox,
as the User's Agent, has a strong interest in the user seeing the content
she asked for and protecting her confidentiality (as best as is possible)
while doing the asking. Those are properties transport security gives you.
Sadly, both of those fundamental properties of transport are routinely
broken, to the user's detriment, when http:// is used.

As Martin and Richard have noted, we have a strong approach with HSTS for
the migration of legacy markup onto https as long as the server is
appropriately provisioned - and doing that is much more feasible now than
it used to be. So sites that are deploying new features can make the
transition with a minimum of fuss.
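
(For the unfamiliar: HSTS is just a response header - roughly
Strict-Transport-Security: max-age=31536000; includeSubDomains - after which
the browser upgrades that host's http:// references to https:// on its own.
That's my paraphrase; the HSTS spec has the details.)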

For truly untouched and embedded legacy services I agree this is a harder
problem and compatibility needs to be considered a managed risk.

-P


Re: Intent to deprecate: Insecure HTTP

2015-04-15 Thread Patrick McManus
On Wed, Apr 15, 2015 at 10:03 AM, commodorej...@gmail.com wrote:

   rather than let webmasters make their own decisions.


I firmly disagree with your conclusion, but I think you have identified the
central property that is changing.

Traditionally, transport security has been a unilateral decision of the
content provider. Consumers could take it or leave it as content providers
tried to guess what content was sensitive and what was not. They could
never really know, of course. The contents of a public library are not
private - but my reading history may or may not be. An indexed open source
repository is not private - but my searching for symbols involved in a
security bug may be. The content provider can't know a priori, and even if
they do, they may not share the interests of the consumer. The decision is
being made by the wrong party.

The HTTPS web says that data consumers have the right to (at least
transport) confidentiality and data integrity all of the time, regardless
of the content. It is the act of consumption that needs to be protected as
we go through our day to day Internet lives. HTTPS is certainly not perfect
at doing this, but its the best thing we've got.

So yes, this is a consumer-first, rather than provider-first, policy.

-Patrick


Re: PSA: Network 'jank' - get your blocking IO off of STS thread!

2015-04-03 Thread Patrick McManus
It sounds like OverbiteFF is using it as intended.

On Fri, Apr 3, 2015 at 2:19 PM, Cameron Kaiser ckai...@floodgap.com wrote:

 On 3/26/15 8:37 AM, Randell Jesup wrote:

 Can we stop exposing the socket transport service's nsIEventTarget outside
 of Necko?


 If we move media/mtransport to necko... or make an exception for it (and
 dom/network/UDPSocket and TCPSocket, etc).  Things that remove loaded
 footguns (or at least lock them down) are good.

 Glad the major real problem was too-similar-names (I'd never heard of
 STREAMTRANSPORTSERVICE (or if I had, it had been long-forgotten, or
 mis-read as SOCKETTRANSPORTSERVICE)).


 The OverbiteFF (gopher and legacy protocols) add-on uses
 nsISocketTransportService to open sockets, and I'm sure it's not the only
 one that does. The implementation is non-blocking, but I want to clarify
 from the above post that the intention is not to block non-Necko consumers
 from using it. Is this acceptable usage, or is it deprecated?

 Cameron Kaiser





Re: PSA: Network 'jank' - get your blocking IO off of STS thread!

2015-03-26 Thread Patrick McManus
media uses it by agreement and in an appropriate way to support rtcweb.

On Thu, Mar 26, 2015 at 10:20 AM, Kyle Huey m...@kylehuey.com wrote:

 Can we stop exposing the socket transport service's nsIEventTarget outside
 of Necko?

 - Kyle


 On Thu, Mar 26, 2015 at 8:14 AM, Patrick McManus mcma...@ducksong.com
 wrote:

 Good catch. Looking at
 https://mxr.mozilla.org/mozilla-central/ident?i=NS_SOCKETTRANSPORTSERVICE_CONTRACTID
 the only uses I see other than the one Ehsan unearthed are expected, so
 maybe that's the sum of the short-term work.

 On Thu, Mar 26, 2015 at 10:06 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:

 On 2015-03-26 11:00 AM, Kyle Huey wrote:

 On Thu, Mar 26, 2015 at 7:49 AM, Patrick McManus mcma...@ducksong.com
 wrote:

  Is this thread mostly just confusion from these things sounding so much
 alike? Or am I confused now?


 Most likely.

 Does anyone have actual data to show that this is a problem?


 There's some truth to it.  Looks like some code uses the *socket*
 transport service when it probably means *stream* transport service.
 Example: http://mxr.mozilla.org/mozilla-central/source/dom/
 workers/ServiceWorkerEvents.cpp#249

 But other examples such as DOM Cache are not affected as far as I can
 tell.






Re: PSA: Network 'jank' - get your blocking IO off of STS thread!

2015-03-26 Thread Patrick McManus

 On Thu, Mar 26, 2015 at 2:46 AM, Randell Jesup rjesup.n...@jesup.org
 wrote:



 t.  (I even thought
 there was a separate SocketTransportService which was different from
 StreamTransportService.)


You're right, they are different things.

The socket transport service is a single thread that does most of the low
level networking -
https://mxr.mozilla.org/mozilla-central/source/netwerk/base/nsSocketTransportService2.cpp#487
- blocking this thread would be very bad.

And the stream transport service is a thread pool that is used primarily
for buffering and management, etc.:
https://mxr.mozilla.org/mozilla-central/source/netwerk/base/nsStreamTransportService.cpp#485
I'm not sure how I feel about overloading arbitrary other functionality
onto it, but it's certainly less damaging than blocking the single socket
thread.
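
To make the distinction concrete, the intended pattern for blocking-ish
work looks roughly like this (a sketch using current helper names; treat
the exact signatures as approximate):

  #include "nsCOMPtr.h"
  #include "nsIEventTarget.h"
  #include "nsNetCID.h"
  #include "nsServiceManagerUtils.h"
  #include "nsThreadUtils.h"

  static void DispatchBlockingWork() {
    // Blocking I/O belongs on the stream transport service's pool,
    // never on the single socket transport thread.
    nsCOMPtr<nsIEventTarget> pool =
        do_GetService(NS_STREAMTRANSPORTSERVICE_CONTRACTID);
    nsCOMPtr<nsIRunnable> task = NS_NewRunnableFunction(
        "ExampleBlockingTask", [] { /* blocking reads/writes are OK here */ });
    pool->Dispatch(task.forget(), NS_DISPATCH_NORMAL);
  }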

Is this thread mostly just confusion from these things sounding so much
alike? Or am I confused now?


Re: PSA: Network 'jank' - get your blocking IO off of STS thread!

2015-03-26 Thread Patrick McManus
thanks bkelly

On Thu, Mar 26, 2015 at 9:01 AM, Benjamin Kelly bke...@mozilla.com wrote:

 Actually, I'm going to steal bug 990804 and see if we can get something
 worked out now.  My plan is just to duplicate the STS code with a different
 XPCOM uuid for now.

 On Thu, Mar 26, 2015 at 9:29 AM, Benjamin Kelly bke...@mozilla.com
 wrote:

  On Thu, Mar 26, 2015 at 2:46 AM, Randell Jesup rjesup.n...@jesup.org
  wrote:
 
  Some examples pointed out to me: FilePicker, the spell-checker, the
  DeviceStorage DOM code, DOM cache code in Manager.cpp (via
  BodyStartWriteStream()), even perhaps ResolvedCallback in
  ServiceWorkers. (I haven't looked closely at all of the uses yet.)
 
 
  Sorry for this. Obviously there has been some confusion as I was
  explicitly directed towards STS during the DOM Cache development.  (I
 even
  thought there was a separate SocketTransportService which was different
  from StreamTransportService.)
 
  In any case, I wrote a bug to fix the Cache issue:
 
https://bugzilla.mozilla.org/show_bug.cgi?id=1147850
 
  I will try to fix this in the next couple weeks.
 
  Sorry again.
 
  Ben
 




Re: PSA: Network 'jank' - get your blocking IO off of STS thread!

2015-03-26 Thread Patrick McManus
Good catch. Looking at
https://mxr.mozilla.org/mozilla-central/ident?i=NS_SOCKETTRANSPORTSERVICE_CONTRACTID
the only uses I see other than the one Ehsan unearthed are expected, so
maybe that's the sum of the short-term work.

On Thu, Mar 26, 2015 at 10:06 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
wrote:

 On 2015-03-26 11:00 AM, Kyle Huey wrote:

 On Thu, Mar 26, 2015 at 7:49 AM, Patrick McManus mcma...@ducksong.com
 wrote:

  Is this thread mostly just confusion from these things sounding so much
 alike? Or am I confused now?


 Most likely.

 Does anyone have actual data to show that this is a problem?


 There's some truth to it.  Looks like some code uses the *socket*
 transport service when it probably means *stream* transport service.
 Example: http://mxr.mozilla.org/mozilla-central/source/dom/
 workers/ServiceWorkerEvents.cpp#249

 But other examples such as DOM Cache are not affected as far as I can tell.




Re: Intent to deprecate: persistent permissions over HTTP

2015-03-11 Thread Patrick McManus
I have a slight twist in thinking to offer on the topic of persistent
permissions; part of this falls to the level of spitballing, so forgive the
imprecision:

Restricting persistent permissions is essentially about cache poisoning
attacks. The assumptions seem to be that
a] https is not vulnerable
b] every http transaction is as vulnerable as the last

Those are imperfect (which, granted, is not necessarily a reason to not
proceed - but read on for fun!).

wrt A: We know that this assumption around https is a little sketchy due to
the way the root store is commonly ...ummm.. localized. An enterprise
user allows a new trust anchor for use with their company proxy, during
which time they are by definition MITM'd by consent. I'm not especially
worried about that transaction - such is the nature of the consent. But
then they take that laptop home to a different context without that proxy.
The cached information, in this case a persistent permission, remains.
There is no reason to think the trust between those two environments should
overlap. The HTTP cache has fundamentally the same problem (think about a
ubiquitous resource like ga.js) as the persistent permission.

wrt B: If a user on a home broadband connection conducts a transaction over
plaintext she certainly is exposed to a MITM attack. But repeating that
operation from the same location only adds small marginal risk (i.e. the
risk of the path changing or the actors on that path changing - this can
happen but often does not). OTOH if she moves to her neighbor's wifi or
roams to 4g then it's a whole new ballgame. The URI scheme isn't a good
indicator of risk for each click.

Daniel Stenberg has some code that tries to establish an internal
what-network-am-i-on ID. Think of a more fully implemented version of it as
a hash of your network interfaces and MACs, your router's MAC, etc. It's
currently just used as part of link-change detection, but it could make a
pretty interesting part of a cache key for things we are worried about
being poisoned - the result here would be scoping of persistent permissions
to the topology that you accepted them on.
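
A spitball-level sketch of what such a topology key might look like
(everything here - names, hash choice - is invented for illustration):

  #include <cstdint>
  #include <string>
  #include <vector>

  // FNV-1a over the local interface MACs plus the default gateway's MAC.
  // A permission (or cache entry) keyed on this value would not carry
  // over to a different network, because the key changes with the topology.
  uint64_t HashNetworkTopology(const std::vector<std::string>& aLocalMacs,
                               const std::string& aGatewayMac) {
    uint64_t h = 0xcbf29ce484222325ULL;  // FNV-1a offset basis
    auto mix = [&h](const std::string& aStr) {
      for (unsigned char c : aStr) {
        h ^= c;
        h *= 0x100000001b3ULL;  // FNV-1a prime
      }
    };
    for (const auto& mac : aLocalMacs) {
      mix(mac);
    }
    mix(aGatewayMac);
    return h;
  }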


On Fri, Mar 6, 2015 at 12:27 PM, Anne van Kesteren ann...@annevk.nl wrote:

 A large number of permissions we currently allow users to store
 persistently for a given origin. I suggest we stop offering that
 functionality when there's no lock in the address bar. This will make
 it harder for a network attacker to abuse these permissions. This
 would affect UX for:

 * Geolocation
 * Notification
 * Fullscreen
 * Pointer Lock
 * Popups

 If you are interested in demos of how these function today:

 * http://dontcallmedom.github.io/web-permissions-req/tests/geo-get.html
 *
 http://dontcallmedom.github.io/web-permissions-req/tests/notification.html
 * http://dontcallmedom.github.io/web-permissions-req/tests/fullscreen.html
 *
 http://dontcallmedom.github.io/web-permissions-req/tests/pointerlock.html
 * http://dontcallmedom.github.io/web-permissions-req/tests/popup.html

 Note that we have already implemented this for getUserMedia(). You can
 contrast the UX for these two links:

 *
 http://dontcallmedom.github.io/web-permissions-req/tests/gum-audiovideo.html
 *
 https://dontcallmedom.github.io/web-permissions-req/tests/gum-audiovideo.html

 This seems like a change we can make today that would be better for
 our users and nudge those that require persistence to do the right
 thing, without causing much harm.


 --
 https://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS

2014-11-25 Thread Patrick McManus
Hi Anne,

On Tue, Nov 25, 2014 at 9:13 AM, Anne van Kesteren ann...@annevk.nl wrote:


  They are doing this with opportunistic encryption (via the
  Alternate-Protocol response header) for http:// over QUIC from chrome.
 In
 



 Or are you saying that
 because Google experiments with OE in QUIC, including in services
 today through Chrome, it is weird for them to oppose OE in HTTP?


It's interesting because of what it says about the actual options instead of
the arguments we make about them.

Google is trying hard to be https:// everywhere and yet they still have to
run http:// services. That illustrates how hard a full transition is - most
people can't match the kind of resources Google can spend on the problem,
and yet Google hasn't been 100% successful. The rest of the web does far
worse - heck, we just launched our new h.264 Cisco addon download over
http:// (with an external integrity check).

When running http://, Google has twice made an engineering decision to do so
with OE and something better than h1. The result is better than
plaintext-h1, and we should also be striving to bring our users and the
whole web the same benefits. "This site runs better in Chrome" sucks.

What we're going to do is make https better, faster, and cheaper as the long
play toward ubiquitous real security, and in the short term offer folks more
encryption and better transports on http:// too, because we hope to reach
more of them that way. Plaintext is the last choice and is maintained
strictly for compatibility - nobody wins when we do that.

-P

[I think we're firmly into the recycling phase again :)]


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS

2014-11-19 Thread Patrick McManus
On Wed, Nov 19, 2014 at 1:45 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:


 Does Akamai's logo appearing on the Let's Encrypt announcements change
 Akamai's need for OE? (Seems *really* weird if not.)


let's encrypt is awesome - more https is awesome.

The availability of Let's Encrypt (or something like it) was certainly
taken into consideration in the OE thinking. The idea has been kicking
around for a while from lots of orgs, so it was foreseeable someone would
pull it off - but huge kudos to our partnership for doing it, as that really
is powerful and will help the web. It's also a feather in Mozilla's cap. I'm
really excited about it.

OE plus Let's Encrypt is exactly the manifestation of walking and chewing
gum at the same time that I referred to earlier. We're working hard to
improve things on multiple fronts, and the ideas are not at odds with
each other.

Ciphertext as the new plaintext is meant to cover situations where people
won't run https. Kudos for let's encrypt helping make that a smaller
market, but it doesn't solve all the use cases of http:// (nor does OE -
but it reaches potentially more of them). These include legacy content and
urls, third-party mixed content, regulatory compliance, CA-risk, non-access
to webpki.

A hosting or CDN provider doesn't control all of those things - especially
the legacy and mixed content. But they can compatibly improve the transport
experience and they're interested in doing that. So to answer your question
without having a partner discussion on dev-platform, the folks interested
in deploying OE foresaw let's encrypt (or something like it) and are still
interested in OE.

There are basically 2 arguments against OE here: 1] you don't need OE
because everyone can run https and 2] OE somehow undermines https

I don't buy them because [1] remains a substantial body of data and [2] is
unsubstantiated speculation and borders on untested FUD.

I understand that Google is the loudest voice - yet these realities impact
them as well if you look at their actions on google.com. Google, despite
being the leading industry player in making admirable Herculean efforts at
deploying sophisticated https, still also runs lots of http:// services
such as nosslsearch, gstatic, and google-analytics. The cost of a cert
isn't what is holding them back from making those services https-only - and
they are the best case scenario for a party being both interested and
capable.

fwiw - nobody would be happier than me if [1] dwindled to 0 and OE were
moot; I just think that will be a super long time in coming, and in the
interim we can substitute some of that plaintext with ciphertext - and
that's a win for our users.

-P


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Patrick McManus
On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi
wrote:

 The part that's hard to accept is: Why is the countermeasure
 considered effective for attacks like these, when the level of how
 active the MITM needs to be to foil the countermeasure (by
 inhibiting the upgrade by messing with the initial HTTP/1.1 headers)
 is less than the level of active these MITMs already are when they
 inject new HTTP/1.1 headers or inject JS into HTML?



There are a few pieces here -
1] I totally expect what you describe about signal stripping to happen
to some subset of the traffic, but an active cleartext carrier-based MITM
is not the only opponent. Many of these systems are tee'd read-only
dragnets, especially in the less sophisticated scenarios.
1a] not all of the signalling happens in band, especially wrt mobility.
2] When the basic ciphertext technology is proven, I expect to see other
ways to signal its use.

I casually mentioned a TOFU pin yesterday and you were rightly concerned
about pin fragility - but in this case the pin needn't be hard-fail (and
pin was a poor word choice) - it's an indicator to try OE. That can be
downgraded if you start actively resetting 443, sure - but that's a much
bigger step to take, one that may result in generally giving users of your
network a bad experience.

And if you go down this road you find all manner of other interesting ways
to bootstrap OE - especially if what you are bootstrapping is an
opportunistic effort that looks a lot like https on the wire: gossip
distribution of known origins, optimistic attempts on your top-N frecency
sites, DNS(sec?)... even h2 https sessions can be used to carry http-schemed
traffic (the h2 protocol finally explicitly carries the scheme as part
of the transaction instead of making all transactions on the same
connection carry the same scheme), which might be a very good thing for
folks with mixed content problems. Most of this can be explored
asynchronously at the cost of some plaintext usage in the interim. It's
opportunistic, after all.
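
For concreteness, the in-band signal under discussion is an
alternative-services advertisement carried on the cleartext response -
roughly like this (syntax illustrative; the alt-svc draft details are
still in flux):

  HTTP/1.1 200 OK
  Alt-Svc: h2=":443"; ma=3600

An on-path attacker can of course strip that header from the cleartext
response - exactly the downgrade being discussed - but the out-of-band
bootstraps above don't share that weakness.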

There is certainly some cat and mouse here - as Martin says, it's really
just a small piece. I don't think of it as more than replacing some
plaintext with some encryption - that's not perfection, but I really do
think it's significant.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-13 Thread Patrick McManus
I haven't really waded into this iteration of the discussion because there
isn't really new information to talk about. But I know everyone is acting
in good faith so I'll offer my pov again. We're all trying to serve our
users and the Internet - same team :)

OE means ciphertext is the new plaintext. This is a transport detail.

Of course https:// is more secure than http:// of any form. This isn't
controversial - OE proponents believe this too :) It's a matter of opinion
exactly how common, comprehensive, and easy downgrade to cleartext will be
in practice - but it's trivially easy to show an existence proof. Therefore,
given the choice, you should be running https://. Full stop.

However, in my opinion https deployment is not trivially easy to do all the
time and in all environments, and as a result TLS-based ciphertext is an
improvement on the de facto cleartext alternative.

Particularly at scale, using forward-secret suites mixed in with https://
traffic, it creates an obstacle to dragnet interception. TOFU pinning is
another possibility that helps, especially wrt mobility. It's a matter of
opinion how big of an obstacle that is. I get feedback from people that I
know are collecting cleartext right now who don't want us to do it. That's
encouraging.

https:// has seen very welcome growth - but Ilya's post is a bit generous
in its implications on that front, and even the most optimistic reading
leaves tons of plaintext http://. If you measure by HTTP transaction you
get an https share in the mid-50% range (this is closer to Ilya's approach),
and our metrics match the post about Chrome. However, if you measure by
page load or by origin you get numbers much, much lower, with slower growth
(we have metrics on the former - origin numbers are based on web
crawlers). If you measure by byte count you start getting ridiculously low
amounts of https. I want to see those numbers higher, we all do, but I also
think that bringing some transport confidentiality to the fraction you
can't bring over to the https:// camp is a useful thing for the
confidentiality of our users, and it doesn't ignore the reality of the
situation.

There are lots of reasons people don't run https://. The most unfortunate
one, which OE doesn't help with in any sense, is that this choice is wholly
in the hands of the content operator while the cost of confidentiality loss
is borne at least partially (and perhaps completely) by the user. But
that's not the only reason - mixed content, cert management, application
integration, SNI problems, PKI distrust, OCSP risk, and legacy markup are
just various parts of the story of why some content owners don't deploy
https://. OE can help with those - those sites aren't run by folks with
google.com-like resources to overcome them all. There are other barriers OE
can't help with, such as hosting premium charges.

It's a false dichotomy to suggest we can't work on mitigations to those
problems to encourage https and also provide OE for scenarios that can't be
satisfied that way. This isn't hypothetical - we absolutely are both
walking and chewing gum at the same time already on this front.

I don't really believe many in the position to choose between OE and https
would choose OE - I expect it to be used by the folks that can't quite get
there. OE doesn't change the semantics of web security, so if I'm wrong
about OE's relationship to https transition rates we can disable it - it
has no semantic meaning to worry about compatibility with: ciphertext is
the new plaintext but the web (security and other) model is completely
unchanged as this is a transport detail. Reversion is effectively a safety
valve that I would have no problem using if it were necessary.

Thanks.

-Patrick

On Wed, Nov 12, 2014 at 8:23 PM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 On Wed, Nov 12, 2014 at 11:12 PM, Richard Barnes rbar...@mozilla.com
 wrote:
 
  On Nov 12, 2014, at 4:35 AM, Anne van Kesteren ann...@annevk.nl
 wrote:
 
  On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
  The whole line of argumentation that web browsers and servers should be
  taking advantage of opportunistic encryption is explicitly informed by
  what's actually happening elsewhere. Because what's *actually*
 happening
  is an overly-broad dragnet of personal information by a wide variety
 of both
  private and governmental agencies -- activities that would be
 prohibitively
  expensive in the face of opportunistic encryption.
 
  ISPs are doing it already it turns out. Governments getting to ISPs
  has already happened. I think continuing to support opportunistic
  encryption in Firefox and the IETF is harmful to our mission.
 
  You're missing Adam's point.  From the attacker's perspective,
 opportunistic sessions are indistinguishable from

 I assume you meant to say indistinguishable from https sessions, so
 the MITM risks breaking some https sessions in a noticeable way if the
 MITM tries to inject itself into an opportunistic session.

 

Re: Git - Hg workflows?

2014-10-31 Thread Patrick McManus
I use git day to day. I use hg primarily for landing code and hg bzexport.

On Fri, Oct 31, 2014 at 1:48 AM, Gregory Szorc g...@mozilla.com wrote:

 I
 I'm interested in knowing how people feel about these hidden hg tools.
 Is going through a hidden, local hg bridge seamless? Satisfactory? Barely
 tolerable? A horrible pain point? (I noticed some of the hg interactions in
 moz-git-tools aren't optimal. If these are important tools, please ping me
 off list so I can help you improve them.)


I use some older scripts Nick Hurley wrote to push to try from git - they
are basically the same model: a hidden local hg bridge.

I've always been too cowardly to use them to push to anything more than
try, but I do rely on them heavily for that purpose. They work, but are
awkward, and often take a long time before figuring out I need a git fetch
--all for them to find the right context to push on. There is some
tolerable pain. Nonetheless I'm thrilled to have them!

I use git format-patch and import patches into an hg queue and push them to
inbound when I'm really landing things. I do something similar to upload
patches to bugzilla. Just the other day I got burnt for the first time by
having two separate workflows - I pushed the wrong patch from an hg queue
to inbound that didn't match my try-certified patch in git. It was operator
error - but it wouldn't have happened if I had just one branch named with
that bug # :)

Overall, how happy are you with your Git fetch/push workflows? Short of
 switching the canonical repositories to Git, what do you need to be more
 productive?


I'd like to be able to push to git.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WOFF2 webfont format

2014-10-09 Thread Patrick McManus
 OK. So it can work if every browser that supports the format puts it in
 Accept: as soon as it begins support. That may be true of WebP; I don't
 believe it's true of WOFF. Is it?


You need to opt in to the transcoding, yes. But you make it sound like you
can't use woff at all without transcoding, and that's not true. Doing the
right HTTP thing doesn't interfere with also doing the right CSS thing.
Indeed we've been using woff with that crazy text/xml header - just
changing it to reflect our true preferences enables both scenarios.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 6:10 AM, Gervase Markham g...@mozilla.org wrote:

 On 07/10/14 14:53, Patrick McManus wrote:
  content format negotiation is what accept is meant to do. Protocol level
  negotiation also allows designated intermediaries to potentially
 transcode
  between formats.

 Do you know of any software which transcodes font formats on the fly as
 they move across the network?


I'm not aware of font negotiation - but negotiation is most useful when
introducing new types (such as woff2). The Google compression proxy already
does exactly that for images, and people are successfully using the AWS
CloudFront proxy in environments where the same thing is done. Accept is
used to opt in to WebP on those services, and that allows them to avoid
doing UA sniffing. They don't normally give Firefox WebP, but if you make
an add-on that changes the Accept header to include webp they will serve
Firefox that format. That's what we want to encourage instead of UA
sniffing.



  imo you should add woff2 to the accept header.


As with WebP, this is particularly useful for opting in to a new format. I
agree that a list of legacy formats and q-values is all rather useless,
but as a signal that you want something new that might not be widely
implemented it's a pretty good thing. In this case it's certainly better than
the text/html-based header being used.


 Do you know of any software which pays attention to this header?


above.

HTTP request header byte counts aren't something to be super concerned with,
within reason (URIs, cookies, and congestion control pretty much determine
your performance fate on the request side). And it sounds like wrt fonts
the Accept header could be made more relevant and actually smaller as well.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 11:18 AM, Jonathan Kew jfkth...@gmail.com wrote:


 So the negotiation is handled within the browser, on the basis of the
 information provided in the CSS stylesheet, *prior* to sending any request
 for an actual font resource.


I'm not advocating that we don't do the CSS bits too. That's all cool.
Jonas's suggestion was also adding an appropriate Accept bit.


 Given that this is the established model, defined in the spec for
 @font-face and implemented all over the place, I don't see much value in
 adding things to the Accept header for the actual font resource request.


Intermediaries, as I mentioned before, are a big reason. The header provides
an opt-in opportunity for transcoding where appropriate (and I'm not claiming
I'm up to speed on the ins and outs of font coding).

Y'all can do what you want - but using protocol negotiation in addition to
the CSS negotiation is imo a good thing for the web.


 FWIW, when DNT was being created HTTP request header byte count seemed to
 be a pretty strong concern, which (AIUI) was why we ended up with DNT: 1
 rather than something clearer like DoNotTrack: true.


I know - but I disagree pretty strongly with the analysis there. The impact
is extremely marginal... and trust me, I'm very interested in HTTP
performance :)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 11:44 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Wed, Oct 8, 2014 at 5:34 PM, Patrick McManus mcma...@ducksong.com
 wrote:
  intermediaries, as I mentioned before, are a big reason. It provides an
  opt-in opportunity for transcoding where appropriate (and I'm not
 claiming
  I'm up to speed on the ins and outs of font coding).

 If the format is negotiated client-side before a URL is fetched,
 that's not going to help, is it?


Scenario: the origin only enumerates ttf in the CSS, the client requests the
ttf (Accept: woff2, */*), and the intermediary transcodes to woff2, assuming
such a transcoding is a meaningful operation.
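
Spelled out on the wire, that looks roughly like this (header values
illustrative only - the real font MIME types are their own sad story, as
noted elsewhere in the thread):

  GET /fonts/myfont.ttf HTTP/1.1
  Host: example.com
  Accept: application/font-woff2;q=1.0, */*;q=0.8

  HTTP/1.1 200 OK
  Content-Type: application/font-woff2
  Vary: Accept

The Vary: Accept response header is what lets caches keep the ttf and
woff2 variants of the same URL straight.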
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 12:03 PM, Jonathan Kew jfkth...@gmail.com wrote:

 Possible in theory, I guess; unlikely in practice. The compression
 algorithm used in WOFF2 is extremely asymmetrical, offering fast decoding
 but at the cost of slow encoding. The intent is that a large library like
 Google Fonts can pre-compress their fonts offline, and then benefit from
 serving smaller files; it's not expected to be suitable for on-the-fly
 compression.



Accelerators like Cloudflare and mod_pagespeed/mod_proxy exist to do this
kind of general thing as reverse proxies for specific origins - they can
cache the transcoding locally. Obviously that's a lot harder for forward
proxies to do. Reverse proxies are often the termination of https:// as
well - so this transformation remains relevant in the https world we want.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WOFF2 webfont format

2014-10-07 Thread Patrick McManus
Content format negotiation is what Accept is meant to do. Protocol-level
negotiation also allows designated intermediaries to potentially transcode
between formats. imo you should add woff2 to the Accept header.

On Tue, Oct 7, 2014 at 9:39 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 On Fri, Oct 3, 2014 at 3:11 AM, Jonas Sicking jo...@sicking.cc wrote:
@font-face {
  font-family: MyFont;
  src: url(myfont.woff2) format("woff2"),
       url(myfont.woff) format("woff"),
       url(myfont.eot) format("embedded-opentype"),
       url(myfont.ttf) format("truetype");
}
 
  Could we at least add woff2 to the Accept header when fetching fonts?

 Why? The CSS-level negotiation feature shown above works great and
 doesn't involve any HTTP-level varying. (Also, like Anne says, fonts
 MIME types are a sad story.)

 I think we should treat Accept in general as a legacy mistake and not
 try to make it do new tricks.

 --
 Henri Sivonen
 hsivo...@hsivonen.fi
 https://hsivonen.fi/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Patrick McManus
On Fri, Sep 12, 2014 at 1:55 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 tion to https
 that obtaining, provisioning and replacing certificates is too
 expensive.


Related concepts are at the core of why I'm going to give Opportunistic
Security a try with http/2. The issues you cite are real issues in
practice, but they become magnified in other environments where the PKI
doesn't apply well (e.g. behind firewalls, in embedded devices, etc.).
And then, perhaps most convincingly for me, there remains a lot of legacy
web content that can't easily migrate to the vanilla https:// scheme we all
want it to run on (e.g. third-party dependencies or SNI dependencies), and
this is a compatibility measure for it.

Personally I expect any failure mode here will be that nobody uses it, not
that it drives out https. But establishment is all transparent to the web
security model and asynchronous, so if that does happen we can easily
remove support. The potential upside is that a lot of http:// traffic will
be encrypted and protected against passive monitoring.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: Disabling auto-play videos on mobile networks/devices?

2014-08-26 Thread Patrick McManus
On Mon, Aug 25, 2014 at 3:03 AM, Justin Dolske dol...@mozilla.com wrote:

 I think it would make a lot of sense to have an explicit low bandwidth
 mode that did stuff like this, instead of trying to address it piecemeal.
 There's all kinds of stuff that can consume bandwidth, and if we think it's
 a real concern then let's directly address it.


I think that's a pretty cool idea - I'm on vacation for another week, can
you file a bug and cc: me? Or I'll do it when I come back.

There are a couple technical gotchas involved in that - basically, to do it
effectively you need to open a connection with a small TCP receive window
(which is fine). If you want the connection to run faster later you need to
open the window back up - and some Microsoft OSes (actually maybe all of
them) won't let you open it past 64KB if you start with an intentionally
small one, and that's not enough for full line rate on a lot of networks.
But that should be manageable for this use case.
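
To sketch the mechanism (Python for brevity, and purely illustrative - the
real work would live in necko's socket transport, and the buffer sizes are
just example values):

  import socket

  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  # The TCP window scale factor is negotiated on the SYN, so the receive
  # buffer must be shrunk before connect() for the small window to stick.
  s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024)
  s.connect(("example.com", 80))
  # Raising SO_RCVBUF later can reopen the window, but because the scale
  # factor was already fixed at SYN time, some OSes can't grow the
  # effective window past 64KB.
  s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

That's the gotcha in a nutshell: starting small is cheap, growing again
later is the tricky part.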
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: Prerendering API

2014-08-11 Thread Patrick McManus
An obvious tie-in here is the network predictor (formerly 'seer') work Nick
Hurley has been doing. It's basically already working on the what to fetch
next questions, but not the rendering parts.


On Mon, Aug 11, 2014 at 6:40 PM, Karl Dubost kdub...@mozilla.com wrote:


 On 12 August 2014 at 07:03, Jonas Sicking jo...@sicking.cc wrote:
  * A use-case that we came upon pretty quickly when we were looking at
  using prerendering in FirefoxOS is that we might not know the exact
  URL that the user is likely to navigate to, but we have a pretty good
  guess about what template page it's going to be.

 If I remember bits of Google strategy, it was basically tied to their
 proxy servers where they basically know statically which page the users are
 most likely to click next.

 There are also some logical next steps:

 * Infinite loading through XHR
 * Next items in a list of search results, gallery, etc. Anything
 sequential. (Next, Prev)

 A bit more far fetched but maybe worth thinking for a version 2.0
 heuristics. If it's only client-side, users could set up their browser to
 collect the browsing habits for the most common sites (*without ever
 sending it back to a server*). With this browsing habits sequence, it would
 be possible to know for the browser that most of the time the user follows
 a certain pattern and prerender the pages. Additional possible benefits:
 User understands his/her own patterns through a dashboard.

 --
 Karl Dubost, Mozilla
 http://www.la-grange.net/karl/moz

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: NS_ERROR_NET_PARTIAL_TRANSFER

2014-04-23 Thread Patrick McManus
I want to highlight why it's also an important change - there is a very real
and important error: your channel content is truncated. It's a bug that
necko doesn't tell you about that right now. So we're going to fix that up.

The download manager is the obvious victim of this right now. It declares
some classes of interrupted downloads as OK. You can't use or restart the
result.

Any channel over TLS that thinks it has integrity (scripts? CSS? someone's
XHR?) because of that can be silently truncated with the current bug.

Consumers that want to ignore the error (from the bug, that is at least
docshell; and Daniel is asking primarily about images here, I believe) will
need to handle the exception.

There is no change in the amount of data returned - just NS_OK becoming
NS_ERROR_NET_PARTIAL_TRANSFER.


On Tue, Apr 22, 2014 at 10:06 PM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

 On 2014-04-22, 9:59 PM, Boris Zbarsky wrote:

 On 4/22/14, 9:30 PM, Ehsan Akhgari wrote:

 Do we currently return NS_OK from Necko in such circumstances or another
 error code?


 Currently we return NS_OK, so the necko client thinks the transfer
 completed successfully.


 That seems like a huge behavior change. :(


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New necko cache?

2014-02-19 Thread Patrick McManus
+cc


On Tue, Feb 18, 2014 at 7:56 PM, Neil n...@parkwaycc.co.uk wrote:

 Where can I find documentation for the new necko cache? So far I've only
 turned up some draft planning documents. In particular, I understand that
 there is a preference to toggle the cache. What does application code have
 to do in order to work with whichever cache has been enabled?

 --
 Warning: May contain traces of nuts.
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Patrick McManus
Typically I have to choose between
 1] 80 columns
 2] descriptive and non-abbreviated naming
 3] displaying a logic block without scrolling

To me, #1 is the least valuable.



On Tue, Jan 7, 2014 at 4:51 PM, Jim Porter jpor...@mozilla.com wrote:

 On 01/06/2014 08:23 PM, Karl Tomlinson wrote:

 Yes, those are the sensible options.

 Wrapping at  80 columns just makes things worse for those that
 like to save some screen room for something else, view code on a
 mobile device, etc.


 I for one prefer wrapping at 80 columns because with my font settings, I
 can have 3 buffers open side-by-side. I generally find that a lot more
 useful than the vertical space that would be saved by wrapping at, say, 100
 columns.

 - Jim


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Patrick McManus
I'm fine with enforcing a Gecko-wide coding style as long as it comes with
cross-platform tools to act as arbiter - it is something that needs to be
automated, and isn't worth the effort of trying to get everybody on the same
page by best effort.



On Mon, Jan 6, 2014 at 5:41 PM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

 FWIW I should mention explicitly that I support this proposal.  The only
 big thing that I wish to see changed here is to remove the exception for
 js/* but I can live with that exception being lifted in the future.

 Cheers,
 Ehsan


 On 1/5/2014, 9:34 PM, Nicholas Nethercote wrote:

 We've had some recent discussions about code style. I have a proposal.

 For the purpose of this proposal I will assume that there is consensus on
 the
 following ideas.

 - Having multiple code styles is bad.

 - Therefore, reducing the number of code styles in our code is a win
 (though
there are some caveats relating to how we get to that state, which I
 discuss
below).

 - The standard Mozilla style is good enough. (It's not perfect, and it
 should
continue to evolve, but if you have any pet peeves please mention them
 in a
different thread to this one.)

 With these ideas in mind, a goal is clear: convert non-Mozilla-style code
 to
 Mozilla-style code, within reason.

 There are two notions that block this goal.

 - Our rule of thumb is to follow existing style in a file. From the style
guide:

The following norms should be followed for new code, and for Tower of
 Babel
code that needs cleanup. For existing code, use the prevailing style
 in a
file or module, or ask the owner if you are on someone else's turf and
 it's
not clear what style to use.

This implies that large-scale changes to convert existing code to
 standard
style are discouraged. (I'd be interested to hear if people think this
implication is incorrect, though in my experience it is not.)

I propose that we officially remove this implicit discouragement, and
 even
encourage changes that convert non-Mozilla-style code to Mozilla-style
 (with
some exceptions; see below). When modifying badly-styled code,
 following
existing style is still probably best.

However, large-scale style fixes have the following downsides.

- They complicate |hg blame|, but plenty of existing refactorings (e.g.
  removing old types) have done likewise, and these are bearable if
 they
  aren't too common. Therefore, style conversions should do entire
 files in
  a single patch, where possible, and such patches should not make any
  non-style changes. (However, to ease reviewing, it might be worth
  putting fixes to separate style problems in separate patches. E.g.
 all
  indentation fixes could be in one patch, separate from other changes.
  These would be combined before landing. See bug 956199 for an
 example.)

- They can bitrot patches. This is hard to avoid.

However, I imagine changes would happen in a piecemeal fashion, e.g.
 one
module or directory at a time, or even one file at a time. (Again, see
 bug
956199 for an example.) A gigantic change-all-the-code patch seems
unrealistic.

 - There is a semi-official policy that the owner of a module can dictate
 its
style. Examples: SpiderMonkey, Storage, MFBT.

There appears to be no good reason for this and I propose we remove it.
Possibly with the exception of SpiderMonkey (and XPConnect?), due to
 it being
an old and large module with its own well-established style.

Also, we probably shouldn't change the style of imported third-party
 code;
even if we aren't tracking upstream, we might still want to trade
 patches.
(Indeed, it might even be worth having some kind of marking at the top
 of
files to indicate this, a bit like a modeline?)

 Finally, this is a proposal only to reduce the number of styles in our
 codebase. There are other ideas floating around, such as using automated
 tools
 to enforce consistency, but I consider them orthogonal to or
 follow-ups/refinements of this proposal -- nothing can happen unless we
 agree
 on a direction (fewer styles!) and a way to move in that direction
 (non-trivial
 style changes are ok!)

 Nick
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-06 Thread Patrick McManus
I strongly prefer at least a 100-character-per-line limit. Technology
marches on.


On Mon, Jan 6, 2014 at 9:23 PM, Karl Tomlinson mozn...@karlt.net wrote:

 L. David Baron writes:

  I tend to think that we should either:
   * stick to 80
   * require no wrapping, meaning that comments must be one paragraph
 per line, boolean conditions must all be single line, and assume
 that people will deal, using an editor that handles such code
 usefully

 Yes, those are the sensible options.

 Wrapping at  80 columns just makes things worse for those that
 like to save some screen room for something else, view code on a
 mobile device, etc.
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Recent build time improvements due to unified sources

2013-11-20 Thread Patrick McManus
I was skeptical of this work - so I need to say now that it is paying
dividends bigger and faster than I thought it could. Very nice!


On Wed, Nov 20, 2013 at 3:38 AM, Nicholas Nethercote n.netherc...@gmail.com
 wrote:

 On September 12, a debug clobber build on my new Linux desktop took
 12.7 minutes.  Just then it took 7.5 minutes.  Woo!

 Nick
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster builds, now.

2013-10-02 Thread Patrick McManus
This works great for me - touching netwerk/protocol/http/nsHttpChannel.cpp
and rebuilding with mach build binaries runs in 26 seconds compared to 61
with just mach build, and I see the same ~35-second savings when doing it
on a total no-op build (39 vs 5). Awesome.

-P



On Tue, Oct 1, 2013 at 9:17 PM, Mike Hommey m...@glandium.org wrote:

 Hi,

 If you've read the You want faster builds, don't you thread, you may
 know that some build improvements have recently landed.

 I just landed the most important part of it all, and we should now be in
 a much better place, but, as I'm very cautious, and as this is
 incremental improvements to an existing complex build system that is
 hard to improve all at once without some subtle breakages, this is
 opt-in. It also doesn't work with pymake because of bug 918652.

 At this point, you probably want to know what it is and how to use it.

 There is now a new target for incremental C/C++ rebuilds. What this means
 is, you build once like usual. Then after you do your C/C++ changes,
 instead of:
   - mach build or make -C objdir, which takes forever
   - mach build subdirectory/of/the/changes, which sometimes rebuilds
 toolkit/library, sometimes not, depending what you're rebuilding.
   - make -C objdir/subdirectory/of/the/changes  make -C
 objdir/toolkit/library, which may actually not be enough.
 you can now do:
   - mach build binaries
 or
   - make -C objdir binaries

 It will rebuild your changes and everything that needs rebuilding because
 of them. It will also do that quickly.

 There are a few caveats:
 - it only handles C/C++ changes, including headers. It doesn't handle js
   modules, chrome data, etc.
 - it does *not* handle changes to xpidl, webidl, ipdl. yet. There's a
   followup for this to happen: bug 921309.
 - it doesn't handle changes to nss, nspr, icu or ffi. If you do changes
   there, you still need to run a normal build.
 - it doesn't work without doing a normal build first.
 - while it shouldn't break your builds, it might subtly skip what you
   would expect it to build. If it does, please file a bug or contact me
   on irc. You can still use the old ways until your issues are fixed.

 Something else that I landed today is support to skip directories during
 a normal build when they're not relevant to the build. As always, I'm
 overcautious and this is opt-in. If you want to opt-in for this (and
 future experimental improvements), please add export
 MOZ_PSEUDO_DERECURSE=1 to your mozconfig. Except if you're using
 pymake, sadly. The more people test those experimental improvements, the
 quicker they can become the default for everyone.

 For those interested in the gory details, I'll post some on my blog within
 the next few days.

 Mike
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Replacing Gecko's URL parser

2013-07-01 Thread Patrick McManus
On Mon, Jul 1, 2013 at 12:43 PM, Anne van Kesteren ann...@annevk.nl wrote:

 I'd like to discuss the implications of replacing/morphing Gecko's URL
 parser with/into something that conforms to
 http://url.spec.whatwg.org/


I know it's not your motivation, but the lack of thread safety in the
various nsIURI implementations is a common roadblock for me and something
I'd love to see solved in a rewrite... but as Benjamin mentions, there are
a lot of pre-existing implementations.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-26 Thread Patrick McManus
On Wed, Jun 26, 2013 at 2:07 PM, Gavin Sharp ga...@gavinsharp.com wrote:

 The scope of the current proposal is what's being debated; I don't think
 there's shared agreement that the scope should be detectable from web
 script.


Partially embedded in this discussion is the notion that the open web
requires coordination in all web-facing things. Mozilla should seek
partners and consensus, seek to be an honest broker, consider the imprint
of our footsteps, and be public in all we do. I'm on board with that idea
in the networking space - and if we think that, as a statement of principle,
is an important thing to document, let's do so!

But the underlying spirit of the proposal seems to assume a problem that
isn't in evidence beyond the webapi space. I spend my days in roughly equal
parts with the IETF, with my team, with our code, and with implementers
outside of Gecko doing interop (both clients and servers, which is a bit of
a different working relationship than webapi faces). I'm fortunate to work
with some very cooperative folks, both in industry and academia, and there is
strong awareness of the need to balance innovation against fragmentation.
If anything, I think we (as an industry) rock too few boats for the overall
health of the web.

Therefore I disagree with the relevance of the proposal's bureaucracy to
non-webapi work. Obviously Web IDL reviewers, JS team members, and
blink-coordinated mailing lists aren't the primary stakeholders in a
discussion of congestion control algorithms, TLS options, or data-on-SYN
approaches.

If we think red tape beyond a statement of principle is really needed for
non-webapi spaces, then it's probably best to fork the proposal into other
module-specific documents and let those proceed in parallel.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-25 Thread Patrick McManus
I don't really think there is a controversy here network-wise - mostly,
applicability is a case of I know it when I see it, and the emphasis here
on things that are exposed at the webdev level is the right thing.
Sometimes that's markup; sometimes that's header names, which can touch on
core protocol topics (but usually don't)... and just because
Content-Security-Policy probably is one of those things, it doesn't mean
every other header is too. That's a fine clarification that I think is
already in sync with the proposal.

Brian and I deal with a lot of things that can be negotiated at a protocol
level and therefore don't have to stand the test of time, because they
aren't baked into markup or semantics that need to live forever... when
done well they have built-in fallbacks that allow iteration and real
experience before baking them into standards. That's awesome for the
Internet, and I don't sense anybody on this thread trying to disrupt that.

So while we should be experimenting with 50 different implementations to
make your network traffic faster and more secure, we shouldn't be as freely
messing with the semantics of that. And I think we're decent at that. An
interesting case in point is SPDY - I have a stack of requests from folks
doing sophisticated Real User Monitoring/RUM (e.g. Akamai) who want to be
able to track whether or not a page was loaded with some version of SPDY
and therefore need some kind of content-JS-accessible indicator. It's
totally reasonable. But, while I have totally rearranged everything about
the network transfer with SPDY and will probably be doing it again shortly,
I've been hesitant to add a small bit of markup to the DOM that might
fragment markup and JavaScript without some effort at standardization.
(Chrome has a mechanism, if anybody is interested in taking that topic up,
fwiw.)



On Tue, Jun 25, 2013 at 10:11 AM, Brian Smith bsm...@mozilla.com wrote:

 Robert O'Callahan wrote:
  On Tue, Jun 25, 2013 at 3:08 PM, Brian Smith bsm...@mozilla.com wrote:
 
   At the same time, I doubt such a policy is necessary or helpful for the
   modules that I am owner/peer of (PSM/Necko), at least at this time. In
   fact, though I haven't thought about it deeply, most of the recent
 evidence
   I've observed indicates that such a policy would be very harmful if
 applied
   to network and cryptographic protocol design and deployment, at least.
  
 
  I think you should elaborate, because I think we should have consistent
  policy across products and modules.

 I don't think that you or I should try to block this proposal on the
 grounds that it must be reworked to be sensible to apply to all modules,
 especially when the document already says that that is a non-goal and
 already explicitly calls out some modules to which it does not apply: Note
 that at this time, we are specifically focusing on new JS APIs and not on
 CSS, WebGL, WebRTC, or other existing features/properties.

 Somebody clarified privately that many DOM/JS APIs don't live in the DOM
 module. So, let me rework my request a little bit. In the document, instead
 of creating a blacklist of web technologies to which the new policy would
 not apply (CSS, WebGL, WebRTC, etc.), please list the modules to which the
 policy would apply.

 It seems (from the subject line on this thread, the title of the proposal,
 and the text of the proposal) that the things I work on are probably
 intended to be out of scope of the proposal. That's the thing I want
 clarification on. If it is intended that the stuff I work on (networking
 protocols, security protocols, and network security protocols) be covered
 by the policy, then I will reluctantly debate that after the end of the
 quarter. (I have many things to finish this week to Q2 goals.)

 Cheers,
 Brian

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Awesome Quantile Telemetry Plots on metrics.mozilla.com

2013-04-02 Thread Patrick McManus
Today I noticed some (relatively) new CDF plots of telemetry histogram
data on metrics.mozilla.com. Maybe in the last week or so?

This makes it much easier to determine medians and 90th percentiles -
which is a very common use case for me. If you haven't seen it, I
recommend checking it out.

If, dear reader, you are responsible then thank you! I didn't know who to thank.

-Patrick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What data do you want from telemetry (was Re: improving access to telemetry data)

2013-02-28 Thread Patrick McManus
On Thu, Feb 28, 2013 at 10:36 AM, Benjamin Smedberg
benja...@smedbergs.us wrote:

 Cool. Perhaps we should start out with collecting stories/examples:


In that spirit:

What I almost always want to do is simple: for the last N days of
variable X, show me a CDF (even at just 10-percentile granularity) for
the histogram, and let me break that down by sets of build ID and/or
OS. That's it.

For instance - what is my median time-to-ready for HTTP vs HTTPS
connections (I've got data for both of those)? What about their tails?
How did they change based on some checkin I'm interested in? Not
rocket science - but incredibly painful to even approximate in the
current front end. You can kind of do it, but with a bunch of fudging
and manual addition required, and it takes forever. I'll admit I get
frustrated with all the talk of EC and Hadoop and what-not when it
really seems a rather straightforward task for me to script on the
data.

Gimme the data set and I can just script it instead of spending an
hour laboriously clicking on things and waiting 15 seconds for every
click.
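
For concreteness, the sort of script I have in mind - assuming a
hypothetical export of a histogram as (bucket start, count) pairs, which
is not a format the frontend actually gives you today:

  def percentiles(buckets, points=(0.5, 0.9)):
      # buckets: list of (bucket_start, count) pairs, in ascending order
      total = sum(count for _, count in buckets)
      targets = sorted(points)
      results = {}
      seen = 0
      i = 0
      for start, count in buckets:
          seen += count
          while i < len(targets) and seen >= targets[i] * total:
              results[targets[i]] = start
              i += 1
      return results

  # e.g. median and 90th percentile of a time-to-ready histogram (ms buckets)
  print(percentiles([(0, 120), (50, 900), (100, 450), (250, 80), (500, 12)]))

A couple dozen lines instead of an hour of clicking.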

Reports from the front end seem to indicate that there are 60 million
submissions in the last month across all channels for one of the
things I'm tracking - 651K of those from nightly, fwiw.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform