Re: intent to unship: HPKP (dynamic key pinning)

2019-11-20 Thread Alex Gaynor
Hi Dana,

One thing I don't see mentioned here is certificate transparency, which, while 
not a 1:1 replacement, nevertheless strongly contributes to the same goal of 
control over issuance.

Is there a plan to implement SCT verification in Firefox, similar to what 
Chrome and Apple have shipped? In either event, it sounds like the plan to 
remove HPKP is not contingent on the answer on CT, correct?

Alex

On Sunday, November 17, 2019 at 9:16:56 PM UTC-5, Dana Keeler wrote:
> The breadth of the web public key infrastructure (PKI) is both an asset 
> and a risk. Websites have a wide range of certificate authorities (CAs) 
> to choose from to obtain certificates for their domains. As a 
> consequence, attackers also have a wide range of potential targets to 
> try to exploit to get a mis-issued certificate. The HTTP Public Key 
> Pinning (HPKP) [0] header was intended to allow individual sites to 
> restrict the web PKI to a subset as it applies to their domains, thus 
> decreasing their exposure to compromised CAs.
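For illustration (this example is mine, not from the original mail; the pin
values are placeholders rather than real base64 key hashes), an HPKP header
looked roughly like:

```
Public-Key-Pins: max-age=5184000; includeSubDomains;
    pin-sha256="<base64-hash-of-primary-key>";
    pin-sha256="<base64-hash-of-backup-key>"
```

RFC 7469 requires at least two pins, one of which must be a backup pin not
present in the served certificate chain -- part of why deploying HPKP safely
was so hard in practice.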
> Unfortunately, HPKP has seen little adoption, largely because it has 
> proved to be too dangerous to use. There are a number of scenarios that 
> can render websites inoperable, even if they themselves don't use the 
> header. Chrome removed support for it in version 72 in January of this 
> year [1]. According to our compatibility information, Opera, Android 
> webview, and Samsung Internet are the only other implementations that 
> support the header [2]. At this point, it represents too much of a risk 
> to continue to enable in Firefox.
> A related mechanism, DNS Certification Authority Authorization (CAA) 
> [3], also allows websites to restrict which CAs can issue certificates 
> for their domains. This has seen much larger adoption and does not 
> suffer from the drawbacks of HPKP.
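For comparison (again an illustrative example, not from the original mail),
a CAA policy is published in DNS; a record like the following restricts
issuance for example.com to a single CA:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
```

Unlike HPKP, CAA is checked by CAs at issuance time rather than enforced by
browsers, so a misconfiguration blocks new certificate issuance instead of
locking visitors out of the site.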
> Earlier today, bug 1412438 [4] landed in Firefox Nightly [5] to disable 
> HPKP via a preference. New HPKP headers will not be processed, and 
> previously-cached HPKP information will not be consulted.
> The static list of key pinning information that ships with Firefox is 
> still enabled, and these pins will still be enforced.
> 
> [0] https://tools.ietf.org/html/rfc7469
> [1] https://www.chromestatus.com/feature/5903385005916160
> [2] https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key_Pinning
> [3] https://tools.ietf.org/html/rfc6844
> [4] https://bugzilla.mozilla.org/show_bug.cgi?id=1412438
> [5] Coincidentally, version 72

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent-to-Ship: Backward-Compatibility FIDO U2F support for Google Accounts

2019-03-26 Thread Alex Gaynor
On Tue, Mar 26, 2019 at 3:46 PM J.C. Jones  wrote:

> (Sorry for the delay in replying, had a long-weekend of PTO there)
>
> On Thu, Mar 21, 2019 at 7:08 AM Henri Sivonen 
> wrote:
>
> > On Thu, Mar 14, 2019 at 8:12 PM J.C. Jones  wrote:
> > > It appears that if we want full security key support for Google
> > > Accounts in Firefox in the near term, we need to graduate our FIDO U2F
> > > API support from “experimental and behind a pref”
> >
> > I think it's problematic to describe something as "experimental" if
> > it's not on path to getting enabled.
>
>  [...]
>
> > So I think it's especially important to move *somewhere* from the
> > "experimental and behind a pref" state: Either to interop with Chrome
> > to the extent required by actual sites (regardless of what's de jure
> > standard) or to clear removal so that the feature doesn't look like
> > sites should just wait for it to get enabled and that the sites expect
> > the user to flip a pref.
> >
>
> To be clear, our FIDO U2F API support is behind a pref since it's 1)
> deprecated in favor of the superior WebAuthn standard, and 2) our
> implementation is bare-bones. I think these points have merit, but not
> enough to justify waiting as long as we have, let alone longer.
>
>
> > As a user, I'd prefer the "interop with Chrome" option.
> >
>
> Okay.
>
>
> > > to either “enabled by default” or “enabled for specific
> > > domains by default.” I am proposing the latter.
> >
> > Why not the former? Won't the latter still make other sites wait in
> > the hope that if they don't change, they'll get onto the list
> > eventually anyway?
> >
>
> It's certainly easier to simply pref-flip the feature on by default. I'm
> not opposed to that, though it leaves Safari as the lone browser that will
> be dragging the ecosystem to move to WebAuthn.
>
> > First, we only implemented the optional Javascript version of the API,
> > > not the required MessagePort implementation [3]. This is mostly
> > > semantics, because everyone actually uses the JS API via a
> > > Google-supplied polyfill called u2f-api.js.
> >
> > Do I understand correctly that the part that is actually needed for
> > interop is implemented?
> >
>
> Basically, yes. (See the caveats in the original message)
>
>
> >
> > > As I’ve tried to establish, I’ve had reasons to resist shipping the
> > > FIDO U2F API in Firefox, and I believe those reasons to be valid.
> > > However, a multi-year delay for the largest security key-enabled web
> > > property is, I think, unreasonable to push upon our users. We should
> > > do what’s necessary to enable full security key support on Google
> > > Accounts as quickly as is practical.
> >
> > This concern seems to apply to other services as well.
> >
>
>
> > What user-relevant problem is solved by having to add domains to a
> > list compared to making the feature available to all domains?
> >
>
> Last week's abrupt loss of support on Github [0] is a good case in point.
>
> Does anyone here disagree with simply flipping the preference on by default
> to ride the trains in 68?
>
>
Simply flipping the pref without including register support seems a bit
unfortunate, as it'll leave some websites in a works-sometimes state. While
some larger sites have UIs and help articles explaining that Firefox works
for login but not for registering a key, many will not. If it's possible to
include register support in what rides the train, that seems preferable.

It's probably worth flagging that there'll still be some sites which do not
work even with this, since we have a different implementation strategy than
Chrome, and so some feature detection efforts break.

Cheers,
Alex


>
> [0]
>
> https://www.reddit.com/r/firefox/comments/b39eac/github_no_longer_allows_using_security_keys/


Re: Intent-to-Ship: Backward-Compatibility FIDO U2F support for Google Accounts

2019-03-14 Thread Alex Gaynor
There are a lot of good reasons to oppose this:

- This is a very frustrating _expansion_ of non-standard APIs, more than a
year after we shipped the W3C standard API
- It'll put pressure on other browsers, which were only implementing
webauthn, to also support u2f.js
- It'll prolong the period of having multiple APIs, which I think
contributes to a lot of confusion about the ecosystem
- Once we have the whitelist, there will doubtless be other websites that
ask to be placed on it, giving them an excuse not to migrate to webauthn

Having said all of those -- which are real and true -- there's one
overriding concern: phishing, particularly moderately-sophisticated
phishing which can handle forms of 2FA such as TOTP, SMS, or push, is a
scourge. It is brutally effective, and far too cheap to scale. If this is
the price we need to pay to give our users the protections of security
keys, it's worth it. Further, support by default in more browsers will
hopefully be a good thing for ecosystem wide security key adoption.

I desperately hope some combination of Google Accounts, Android, and Chrome
has a strategy for migrating Google Accounts to webauthn before all these
older Androids cycle out. But in the meantime, I don't think it's fair to
let that block our users from phishing-resistant authenticators. Thanks for
putting this together, JC.

Alex

On Thu, Mar 14, 2019 at 2:12 PM J.C. Jones  wrote:

> Web Authentication (WebAuthn) is our best technical response to
> phishing, which is why we’ve championed it as a technology. All major
> browsers either support it already, or have their support in-progress,
> yet adoption by websites has been slow. The deprecated Javascript API
> that WebAuthn replaces, the FIDO U2F API [0], is mostly confined to
> Chromium-based browsers.
>
>
> # tl;dr #
>
> To make security keys work with Google Accounts in the near future, I
> propose enabling our FIDO U2F API for google.com domains, controlled
> by a whitelist preference. Waiting on Google Accounts to fully support
> Web Authentication will probably take too long, since it’s Android
> deployments which are holding them up.
>
>
> # Background #
>
> More than a year ago, I proposed here an interim solution to permit
> Google Accounts to use existing FIDO U2F API credentials in Firefox
> [1] which was implemented in Bug 1436078. We agreed then to implement
> a hard-coded permission for Google Accounts when utilizing FIDO U2F
> API credential support, whether that was via Web Authentication’s
> backward compatibility extension, or via Firefox’s FIDO U2F API
> support hidden behind the “security.webauth.u2f” preference.
>
> We’ve recently learned that Google Accounts has slipped their schedule
> for using Web Authentication to register new credentials. This delay
> is attributed to security key support on Android being, for most
> devices, non-upgradable. WebAuthn is backwards-compatible with
> credentials produced by the FIDO U2F API. However, WebAuthn-produced
> credentials cannot be used with the FIDO U2F API. Because of that,
> credentials created using WebAuthn will never be usable on the
> majority of FIDO U2F-only Android devices currently in circulation.
>
> Due to this issue, there has been an unclear timeline communicated to
> me for when Google Accounts will support registering security keys
> using Web Authentication.
>
>
> # Proposal #
>
> It appears that if we want full security key support for Google
> Accounts in Firefox in the near term, we need to graduate our FIDO U2F
> API support from “experimental and behind a pref” to either “enabled
> by default” or “enabled for specific domains by default.” I am
> proposing the latter.
>
>
> ## Thorny issues in enabling our FIDO U2F API implementation ##
>
> This is not as simple a decision as it might appear. Certainly we want
> to encourage adoption of Web Authentication rather than the FIDO U2F
> API. There have already been sad cases of well-known web properties
> implementing the deprecated standard after we shipped WebAuthn [2].
> There’s also the matter that we haven’t built-out the whole of the
> FIDO U2F API.
>
> Firefox’s implementation of the FIDO U2F API is deliberately incomplete:
>
> First, we only implemented the optional Javascript version of the API,
> not the required MessagePort implementation [3]. This is mostly
> semantics, because everyone actually uses the JS API via a
> Google-supplied polyfill called u2f-api.js. But the specification is
> the specification.
>
> Second, we do not perform the “Trusted Facet List” portions of the
> “Determining if a Caller's FacetID is Authorized for an AppID”
> algorithm [4] from the specification (we stop at step 3). It seems:
> under-specified [5]; of dubious security/privacy advantage [6]; and
> it’s rarely necessary [7].
>
> I don’t intend to invest the engineering time to fix the above issues
> (neither coding nor spec-wrangling). The anti-phishing future is Web
> Authentication, and we should only care about 

Intent to ship: Devirtualizing IPC method calls (bug 1512990)

2019-02-04 Thread Alex Gaynor
Hi dev.platform!

I wanted to let everyone know about some changes to how C++ IPDL actors are
implemented that are currently in the process of being landed (I expect to
land them to autoland tomorrow morning). This message will summarize these
changes, for complete details see
https://bugzilla.mozilla.org/show_bug.cgi?id=1512990.

Currently, if you have a PProtocol.ipdl file like:

namespace mozilla {
namespace dom {

async protocol PProtocol {
both:
  async Method(ArgType arg);
};

} // namespace dom
} // namespace mozilla

You'd write a C++ implementation like:

class ProtocolParent : public PProtocolParent {
  virtual IPCResult RecvMethod(ArgType& arg) override;
};

With the upcoming patches in bug 1512990, for new protocols/methods you'll
need to make the following changes:

1. RecvMethod will no longer be virtual/override - this also applies to
Alloc, Dealloc, and Answer methods.
2. ProtocolParent needs to declare PProtocolParent as a friend class (and
the same for the Child side).
3. ProtocolParent should be in an EXPORTS header which is the combination
of the namespace + the protocol name. So for our example, ProtocolParent's
declaration should be in mozilla/dom/ProtocolParent.h

This means in mozilla/dom/ProtocolParent.h you'll have:

class ProtocolParent : public PProtocolParent {
  friend class PProtocolParent;

  IPCResult RecvMethod(ArgType& arg);
};

Why these changes?

These let the IPDL compiler emit direct calls, instead of virtual calls, to
all of these methods. This has three benefits:

a) Theoretically, performance -- though in practice, PGO seems to already
devirtualize these calls
b) Memory -- we no longer need to emit vtables for all of these methods
(~20 KB per process)
c) Ergonomics -- implementing classes will now have control over whether
they take arguments by value or reference

We have ported the majority of protocol implementations to this new scheme;
however, some were left unported for various reasons. You can
see them in ipc/ipdl/ipdl/direct_call.py (
https://phabricator.services.mozilla.com/differential/changeset/?ref=567592).
There we keep a list of protocols/classes that still use virtual calls, and
some which merely don't conform to these naming conventions. We'd like to
burn this list down to nothing eventually, so please don't add anything to
it unless absolutely necessary!

If you have a protocol with _multiple_ implementing types, things work
slightly differently. In that case, instead of having multiple types
which directly subclass PProtocolSide, you should define a single class
which directly inherits PProtocolSide, and all of your other implementors
should subclass it. This gives you control over which methods are virtual.

If you have any questions or concerns, please don't hesitate to holler.
I'll be updating the IPDL docs on MDN to reflect these changes.

Cheers,
Alex


Re: How to use Phabricator the correct way

2018-12-26 Thread Alex Gaynor
Hi Soren,

I'm not sure if this is the "correct" way to use phabricator, but it is the
way I use it successfully :-)

I follow basically your steps, except I use them in tandem with the firefox
tree hg extension, and hg bookmarks. So my workflow looks like:

$ # create a new bookmark to work on
$ hg bookmark my-new-features
$ # Do work
$ hg ci -m "blah blah blah"
$ # I use moz-phab instead of arc, but it's the same for the purposes of this example
$ moz-phab submit
$ # Switch back to central
$ hg up central
$ hg bookmark my-other-features
$ # Do work
$ hg ci -m "blah blah blah"
$ moz-phab submit

Hopefully this helps!

On Wed, Dec 26, 2018 at 12:45 PM soeren.hentzschel--- via dev-platform <
dev-platform@lists.mozilla.org> wrote:

> Hi!
>
> I already wrote a few small patches for Firefox but I am still not very
> familiar with Phabricator, and it has already happened more than once that I
> messed up Phabricator requests. Unfortunately, the documentation was not
> very helpful for me; I didn't find any information about working on more
> than one bug.
>
> Maybe you could give me some hints how to use Phabricator the correct way?
> =)
>
> In the past, to create a Phabricator request I used the following steps:
>
> hg add 
> hg commit -m 'Bug 12345 - the commit message' 
> arc diff
>
> But assuming I want to work on another bug while waiting for review: If I
> execute the same steps for another bug my previous Phabricator request will
> be updated with the new commit instead of creating a new Phabricator
> request.
>
> I found out that I can use arc diff  if I want to create a new
> Phabricator request without the commits from my previous request.
>
> Is that the correct way to work on more than one bug at the same time? How
> do you work on different bugs at the same time? Do you have any other
> helpful input which helps to avoid mistakes?
>
> Thank you!
> Sören


Re: Intent to implement: implicit rel=noopener for target=_blank on anchor and area elements

2018-11-21 Thread Alex Gaynor
I'm very excited about this -- in my experience very few developers know
about the dangers of target=_blank.

Do we have any sense of how large the breakage will be, and do we have any
docs for developers who are impacted? (I assume rel=opener is the fix?)
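Assuming rel=opener is indeed the opt-out (an assumption on my part), the
fix for a site that genuinely needs window.opener would be roughly:

```
<!-- under the new behavior, implicitly treated as rel="noopener" -->
<a href="https://example.com/" target="_blank">open</a>

<!-- explicit opt-in to keep window.opener access -->
<a href="https://example.com/" target="_blank" rel="opener">open</a>
```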

Yay!

Alex

On Wed, Nov 21, 2018 at 3:29 AM Andrea Marchesini 
wrote:

> *Summary*: WebKit is experimenting with an interesting feature: target=_blank
> on anchor and area elements implies rel=noopener.
> https://trac.webkit.org/changeset/237144/webkit/
>
> *Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1503681
>
> *Link to standard*: https://github.com/whatwg/html/issues/4078
>
> *Platform coverage*: everywhere.
>
> *Estimated or target release*: 66 - after 1 cycle with this feature enabled
> in nightly and beta only.
>
> *Preference behind which this will be implemented*
> :dom.targetBlankNoOpener.enabled
>
> *Is this feature enabled by default in sandboxed iframes?* yes.
>
> *DevTools bug*: no special support for devtools. Maybe we could dispatch an
> Intervention report via Reporting API.
>
> *Do other browser engines implement this?* Only Safari at the moment:
> https://trac.webkit.org/changeset/237144/webkit
>
> *web-platform-tests*: none, yet, but I can convert my mochitest in a WPT
> easily.
>
> *Is this feature restricted to secure contexts?* No, not needed.


Re: mozilla::Hash{Map,Set}

2018-08-16 Thread Alex Gaynor
Would it make sense to consider giving nsTHashtable and PLDHashTable
different names? Right now the names suggest that we have 3 general purpose
hash tables, but it might be easier if the names better suggested their
purpose, e.g. PLDHashTable -> MinimalCodeSizeHashTable (I'm sure we can do
better than that).

Similarly, would it make sense to consolidate the APIs, as much as
possible, if the primary differences are around implementation details like
"how much is templated/inlined"?

Alex


On Wed, Aug 15, 2018 at 5:39 AM Nicholas Nethercote 
wrote:

> Hi,
>
> I recently moved Spidermonkey's js::Hash{Map,Set} classes from
> js/public/HashTable.h into mfbt/HashTable.h and renamed them as
> mozilla::Hash{Map,Set}. They can now be used throughout Gecko.
>
> (js/public/HashTable.h still exists in order to provide renamings of
> mozilla::Hash{Map,Set} as js::Hash{Map,Set}.)
>
> Why might you use mozilla::Hash{Map,Set} instead of PLDHashTable (or
> nsTHashtable and other descendants)?
>
> - Better API: these types provide proper HashMap and HashSet instances, and
> (in my opinion) are easier to use.
>
> - Speed: the microbenchmark in xpcom/rust/gtest/bench-collections/Bench.cpp
> shows that mozilla::HashSet is 2x or more faster than PLDHashTable. E.g.
> compare "MozHash" against "PLDHash" in this graph:
>
>
> https://treeherder.mozilla.org/perf.html#/graphs?timerange=604800=mozilla-central,1730159,1,6=mozilla-central,1730162,1,6=mozilla-central,1730164,1,6=mozilla-central,1732092,1,6=mozilla-central,1730163,1,6=mozilla-central,1730160,1,6
>
> Bug 1477627 converted a hot hash table from PLDHashTable to
> mozilla::HashSet and appears to have sped up cycle collection in some cases
> by 7%. If you know of another PLDHashTable that is hot, it might be worth
> converting it to mozilla::HashTable.
>
> Both mozilla::Hash{Map,Set} and PLDHashTable use the same double-hashing
> algorithm; the speed difference is due to mozilla::HashSet's extensive use
> of templating and inlining. The downside of this is that mozilla::HashSet
> will increase binary size more than PLDHashTable.
>
> There are overview comments at the top of mfbt/HashTable.h, and the classes
> themselves have more detailed comments about every method.
>
> Nick


Re: Join the ASan Nightly Project!

2018-07-09 Thread Alex Gaynor
Hey Christian,

This is great! I'm super excited.

ASan helps in another way, beyond much better UAF diagnostics: it catches
issues that don't crash at all! It's very common for small buffer overflows
not to corrupt things quite enough to crash.

Two small questions:

1) Is there a metabug or anything to follow along with, for folks who
interested to track the success of this?
2) My understanding is that we don't have macOS-ASAN-nightly (which, as a
macOS user, I'm interested in) because we don't have enough macOS CI
capacity to add ASAN builds to all pushes (as we do with Linux and Windows
ASAN), is that right?

Alex

On Mon, Jul 9, 2018 at 11:43 AM Christian Holler 
wrote:

> tl;dr If you run Linux with at least 16 GB of RAM and you would like to
> help making Firefox even more secure and stable, please consider joining
> the ASan Nightly Project by using a special Nightly browser [1][2]. Windows
> Users: We are actively working on Windows support and will send more
> communication once it is available.
>
>
> Hi everyone,
>
> despite all of our testing efforts, Firefox still sometimes crashes in the
> wild and some of these crashes even look like security problems (e.g.
> use-after-free or other memory corruptions). Unfortunately, in many of
> these cases, the data we have in our crash-stats are not actionable on
> their own (i.e. do not provide enough information for a developer to be
> able to find and fix the problem).
>
> In CI and fuzzing, we have been using AddressSanitizer (ASan), a
> compile-time instrumentation, very successfully for quite a while now. In
> particular the information it provides about use-after-free is much more
> actionable than a simple crash stack.
>
> In order to leverage the combined power of Nightly testing and ASan, we
> have come up with a special ASan Nightly build [1] that is equipped with a
> special ASan reporter addon. This addon is capable of collecting and
> reporting ASan errors back to us, once they are detected. We hope to find
> such errors in the wild and then use the ASan error report to identify and
> fix the problem, even though it might not be reproducible.
>
> Of course this approach comes with a drawback: While ASan’s performance is
> really good, its memory usage is significantly higher compared to a regular
> Nightly browser. Furthermore, ASan needs to retain free’d memory for a
> while in order to detect use-after-free on it. Hence, running such a build
> requires you to have enough RAM (at least 16 GB is recommended) and restart
> the browser once or twice a day.
>
> This project can only succeed if enough people are using it. So if you
> meet the requirements, I would be very happy if you joined the project [2].
>
> Thanks!
>
> - Chris (:decoder)
>
> [1]
> https://archive.mozilla.org/pub/firefox/nightly/latest-mozilla-central/firefox-63.0a1.en-US.linux-x86_64-asan-reporter.tar.bz2
>
> [2]
> https://developer.mozilla.org/en-US/docs/Mozilla/Testing/ASan_Nightly_Project
>


Re: open socket and read file inside Webrtc

2018-07-05 Thread Alex Gaynor
Can you describe in a bit more detail what you're trying to accomplish?

As a general rule, the design of the sandbox is that the content process
shouldn't/can't access any system resources, and any resource you need
should be provided via IPC to the parent process.

alex

On Thu, Jul 5, 2018 at 11:43 AM angelo mantellini 
wrote:

> Alternatives?
>
> On Wed, Jul 4, 2018 at 19:56, Eric Rescorla  wrote:
>
> > On Wed, Jul 4, 2018 at 5:24 AM,  wrote:
> >
> > > Hi,
> > > I'm very new with firefox (as developer, of course).
> > > I need to open a file and tcp sockets inside webrtc.
> > > I read the following link
> > > https://wiki.mozilla.org/Security/Sandbox#File_System_Restrictions
> > > there is the sandbox that does not permit to open sockets or file
> > > descriptors.
> > > could you give me the way how I can solve these my problems?
> > > Thank you very much
> > >
> >
> > There's no way to open raw sockets inside the web platform
> >
> > -Ekr
> >
> >
> > > Angelo
> >


Re: CPOWs are now almost completely disabled

2018-06-28 Thread Alex Gaynor
Outstanding! I love a good IPC attack surface reduction!

Alex

On Wed, Jun 27, 2018 at 6:54 PM Tom Schuster  wrote:

> Since landing bug 1465911 [1], CPOWs [2] are only functional on our testing
> infrastructure. In normal builds that we ship to users CPOWs can be
> created, but no operations like property lookup can be performed on them.
>
> CPOWs continue to exist, because a lot of tests still depend on them. We
> can't disable CPOW creation in user builds, because the context menu passes
> them from the child to the parent and back like a token.
>
> This is a significant IPC attack surface reduction.
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1465911
> [2]
>
> https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Cross_Process_Object_Wrappers


Re: Intent to move Activity Stream into its own process

2018-06-19 Thread Alex Gaynor
Do you have a sense of how this is going to be implemented? Is there going
to be specialized code for this, or is it going to be handled by all the
general navigation changes for process-switching when you change sites?

Alex

On Mon, Jun 18, 2018 at 5:02 PM Mike Conley  wrote:

> >
> > I am not sure about the specific AS work here, but for the general case,
> I
> > believe that Fission intends to cause a process switch anytime a tab
> > navigates to an unrelated origin, so this will also increase the
> frequency
> > of process switching.
> >
>
> Yep, this matches my understanding: we're going to be process flipping a
> lot as we move from Activity Stream -> Web Page, and also as we move from
> Web Page at Origin A to Web Page at Origin B. Our process flipping code is
> going to get exercised pretty hard, so we'll certainly want to surface and
> fix any serious problems there.
>
> If you have process-flipping bugs in mind that you think are particularly
> bad, I recommend getting them into the bug 1451850 tree somehow. I'll
> happily look over bugs with you (Gijs) if you have a list.
>
> -Mike
>
> On Mon, 18 Jun 2018 at 16:31, J. Ryan Stinnett  wrote:
>
> > On Mon, Jun 18, 2018 at 3:20 PM Gijs Kruitbosch <
> gijskruitbo...@gmail.com>
> > wrote:
> >
> > > If it *would* mean a process switch, we may want to take another look
> at
> > > some of the bugs relating to those (process switches are currently
> > > relatively rare) and re-prioritize them.
> > >
> >
> > I am not sure about the specific AS work here, but for the general case,
> I
> > believe that Fission intends to cause a process switch anytime a tab
> > navigates to an unrelated origin, so this will also increase the
> frequency
> > of process switching.
> >
> > So, it would seem that any issues around process switching would be good
> to
> > triage again, either because of the AS work or the more general Fission
> > plans.
> >
> > - Ryan


Re: Coding style: brace initialization syntax

2018-04-13 Thread Alex Gaynor
I don't have an opinion on the style change itself, but I'm a very strong
+1 on just picking one and making sure clang-format enforces it.

Alex

On Fri, Apr 13, 2018 at 9:37 AM, Emilio Cobos Álvarez 
wrote:

> Sorry, I know, coding style thread... But it's Friday and this is somewhat
> related to the previous thread.
>
> Bug 525063 added a lot of lines like:
>
> explicit TTextAttr(bool aGetRootValue)
>   : mGetRootValue(aGetRootValue)
>   , mIsDefined{ false }
>   , mIsRootDefined{ false }
> {
> }
>
> I think I find them hard to read and ugly.
>
> Those changes I assume were generated with clang-format /
> clang-format-diff using the "Mozilla" coding style, so I'd rather ask
> people to agree in whether we prefer that style or other in order to change
> that if needed.
>
> Would people agree to use:
>
>  , mIsRootDefined { false }
>
> Instead of:
>
>  , mIsRootDefined{ false }
>
> And:
>
>  , mFoo { }
>
> Instead of:
>
>  , mFoo{}
>
> ?
>
> I assume the same would be for variables, I find:
>
>   AutoRestore restore { mFoo };
>
> easier to read than:
>
>   AutoRestore restore{ mFoo };
>
> What's people's opinion on that? Would people be fine with a more general
> "spaces around braces" rule? I can't think of a case right now where I
> personally wouldn't prefer it.
>
> Also, we should probably state that consistency is preferred (I assume we
> generally agree on that), so in this case braces probably weren't even
> needed, or everything should've switched to them.
>
> Finally, while I'm here, regarding default member initialization, what's
> preferred?
>
>   uint32_t* mFoo = nullptr;
>
> Or:
>
>   uint32_t* mFoo { nullptr };
>
> I'm ambivalent, but brace syntax should cover more cases IIUC (that is,
> there are things that you can write with braces that you couldn't with
> equals I _think_).
>
> Should we state a preference? Or just say that both are allowed but
> consistency is better?
>
>  -- Emilio


Re: The future of "remote XUL"

2018-03-27 Thread Alex Gaynor
What was the original intended use case for remote XUL, powerful origins
controlled by Mozilla, or enabling developers to build their own powerful
origins?

Alex


On Tue, Mar 27, 2018 at 11:36 AM, Boris Zbarsky  wrote:

> Background: We currently have various provisions for "remote XUL", wherein
> a hostname is whitelisted to:
>
> 1)  Allow parsing XUL coming from that hostname (not normally allowed for
> the web).
>
> 2)  Allow access to XPConnect-wrapped objects, assuming someone hands out
> such an object.
>
> 3)  Run XBL JS in the same global as the webpage.
>
> 4)  Allow access to a "cut down" Components object, which has
> Components.interfaces but not Components.classes, for example.
>
> This machinery is also used for the "dom.allow_XUL_XBL_for_file"
> preference.
>
> The question is what we want to do with this going forward.  From my point
> of view, I would like to eliminate item 4 above, to reduce complexity.  I
> would also like to eliminate item 2 above, because that would get us closer
> to having the invariant that XPConnect is only used from system
> principals.  These two changes are likely to break some remote XUL
> consumers.
>
> The question is whether we should just go ahead and disable remote XUL
> altogether, modulo its usage in our test suites and maybe
> "dom.allow_XUL_XBL_for_file" (for local testing).  It seems like that might
> be clearer than a dribble of breakage as we remove bits like items 2/4
> above, slowly remove various bindings, slowly remove support for some XUL
> tags, etc...
>
> Thoughts?  My gut feeling is that we should just turn off remote XUL
> outside the IsInAutomation() and maybe the "dom.allow_XUL_XBL_for_file"
> case.
>
> -Boris
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: FYI: sccache 0.2.6 released, contains fix for frequent hang in 0.2.5

2018-03-13 Thread Alex Gaynor
For macOS users this will hopefully be available from brew shortly:
https://github.com/Homebrew/homebrew-core/pull/25164

Alex

On Tue, Mar 13, 2018 at 9:21 AM, Ted Mielczarek  wrote:

> Hello,
>
> Yesterday I tagged and released sccache 0.2.6:
> https://github.com/mozilla/sccache/releases/tag/0.2.6
>
> This contains a fix for a hang that users were encountering with sccache
> 0.2.5 due to the make jobserver support added in that version. If you are
> using 0.2.5 you will want to update. If you were holding off on updating
> because of that bug, you should now be able to update without issues.
>
> -Ted
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to ship: macOS sandbox filesystem write restrictions

2018-02-15 Thread Alex Gaynor
Hi all,

Small FYI: With bug 1405088 which landed yesterday, the macOS content
process sandbox no longer allows writing to files _anywhere_ on disk. Huge
thanks to the folks who helped out with landing the blockers!

Going forward if you need the content process to write something to disk,
the appropriate way to do this is to use IPC to request a file
descriptor/HANDLE from the parent process.

This restriction will eventually apply to all platforms, so even if you're
working on a Windows- or Linux-only feature, it's better to just plan for
this restriction.

For folks working on IPC, https://wiki.mozilla.org/Security/Sandbox/IPCguide
contains general guidance on how to write secure IPC methods to help
protect us against sandbox escapes.

Thanks!
Alex
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Ship - Support already-enrolled U2F devices with Google Accounts for Web Authentication

2018-01-30 Thread Alex Gaynor
Is it practical to be data-driven about this? Either by telemetry on how
frequently this is used in Firefox, or by Google giving us data on how much
of their userbase is migrated? This has the benefit of either a) letting us
remove code sooner, or b) ensuring we're aware that we'd be breaking the
experience for a significant number of users.

Alex

On Tue, Jan 30, 2018 at 1:20 PM, J.C. Jones  wrote:

> My understanding is that the gstatic migration will take effect as soon as
> Google deploys Web Authentication. Re-enrolling devices will start some
> unspecified time after that.
>
> They are concerned about Google Accounts that are accessed using a U2F
> device very infrequently (once or twice per year) needing multiple
> opportunities to re-enroll, hence asking for the long period.
>
> If we choose a shorter period, the worst-case is some of those long-tail
> accounts would need to use Chrome to complete the login flow (presumably)
> rather than Firefox or Tor Browser. That is probably okay.
>
> So perhaps February 2020 instead?
>
> On Tue, Jan 30, 2018 at 11:12 AM, Eric Rescorla  wrote:
>
> >
> >
> > On Tue, Jan 30, 2018 at 8:49 AM, J.C. Jones  wrote:
> >
> >> Summary: Support already-enrolled U2F devices with Google Accounts for
> Web
> >> Authentication
> >>
> >> Web Authentication is on-track to ship in Firefox 60 [1], and contains
> >> within it support for already-deployed USB-connected FIDO U2F devices,
> and
> >> we intend to ship with a spec extension feature implemented to support
> >> devices that were already-enrolled using the older U2F Javascript API
> [2].
> >> That feature depends on Firefox supporting the older API’s algorithm for
> >> relaxing the same-origin policy [3] which is not completely implemented
> in
> >> Firefox [4].
> >>
> >> It appears that many U2F JS API-compatible websites do not require the
> >> cross-origin features currently unimplemented in Firefox, but notably
> the
> >> Google Accounts service does: For historical reasons (being the first
> U2F
> >> implementor) their FIDO App ID  is “www.gstatic.com” [5] for logins to
> “
> >> google.com” and its subdomains [6]. Interestingly, as the links to
> >> Chromium’s source code in [5] and [6] show, Chrome chooses to hardcode
> the
> >> approval of this same-origin override rather than complete the
> >> specification’s algorithm for this domain.
> >>
> >> As mentioned in the bug linked in [4], I have a variety of reservations
> >> with the U2F Javascript API’s algorithm. I also recognize that Google
> >> Accounts is the largest player in existing U2F device enrollments. The
> >> purpose of the extension feature in [2] is to permit users who already
> are
> >> using U2F devices to be able to move seamlessly to Web Authentication --
> >> and hopefully also be able to use browsers other than Chrome to do it.
> >>
> >> After discussions with appropriate Googlers confirmed that the “
> >> www.gstatic.com” origin used in U2F is being retired as part of their
> >> change-over to Web Authentication, I propose to hard-code support in
> Gecko
> >> to permit Google Accounts’ cross-origin U2F behavior, the same way as
> >> Chrome has. I propose to do this for a period of 5 years, until 2023,
> and
> >>
> >
> > Five years seems very long to keep this around. 1-2 seems a lot more
> > appropriate. When is the gstatic migration goingt o be complete?
> >
> > -Ekr
> >
> >
> >> to file a bug to remove this code around that date. That would give even
> >> periodically-used U2F-protected Google accounts ample opportunity to
> >> re-enroll their U2F tokens with the new Web Authentication standard and
> >> provide continuity-of-access. The code involved would be a small search
> >> loop, similar to Chrome’s in [6].
> >>
> >> If we choose not to do this, Google Accounts users who currently have
> U2F
> >> enabled will not be able to authenticate using Firefox until their
> >> existing
> >> U2F tokens are re-enrolled using Web Authentication -- meaning not only
> >> will Google need to change to the Web Authentication API, they will also
> >> have to prompt users to go back through the enrollment ceremony. This
> >> process is likely to take several years.
> >>
> >> Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=webauthn
> >>
> >> Spec: https://www.w3.org/TR/webauthn/
> >>
> >> Estimated target release: 60
> >>
> >> Preference behind which this is implemented:
> >> security.webauth.webauthn
> >>
> >> DevTools support:
> >> N/A
> >>
> >> Support by other browser engines:
> >> - Blink: In-progress
> >> - Edge: In-progress
> >> - Webkit: No public announcements
> >>
> >> Testing:
> >> Mochitests in-tree; https://webauthn.io/; https://webauthn.bin.coffee/;
> >> https://webauthndemo.appspot.com/; Web Platform Tests in-progress
> >>
> >>
> >> Cheers,
> >> J.C. Jones and Tim Taubert
> >>
> >> [1]
> >> https://groups.google.com/d/msg/mozilla.dev.platform/tsevyqfBHLE/lccldWNNBwAJ
> >>
> 

Re: overly strict eslint rules

2018-01-03 Thread Alex Gaynor
On Wed, Jan 3, 2018 at 4:43 AM, Mark Banner  wrote:

> On 24/12/2017 19:41, Ben Kelly wrote:
>
>> But I also see rules about cosmetic things like what kind of quotes must
>> be
>> used for strings.
>> AFAICT this kind of rule does not have any tangible safety benefit.  Its
>> purely a cosmetic style choice.  I don't see why we should bounce patches
>> out of the tree if the author and reviewer of a component prefer to use
>> single quotes instead of double quotes in a file.
>>
> As Jonathan already mentioned, the stylistic rules are designed to help
> enforce a consistent style, and reduce code review cycles to address review
> "nits" - benefiting both the submitter and reviewer.
>
> This also helps newcomers find and pick up the consistent style quickly,
> rather than having to "examine the files in the component and work out the
> style that's used most" which is what we've had in the past.
>
>
It's also of tremendous value to those of us whose work on Firefox requires
us to interact with many different components; consistency across the
codebase is a huge boon to our ability to dive into new things as they're
needed.

Alex
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Treating object subrequests as mixed active content

2017-11-27 Thread Alex Gaynor
How does this behavior compare with other browsers?

Alex

On Mon, Nov 27, 2017 at 7:47 AM, Jonathan Kingston  wrote:

> Currently our mixed content blocker implementation treats object
> subrequests as mixed passive content. As part of our plan to deprecate
> insecure connections we are going to block insecure subrequests in flash.
> Mostly because such subrequests can contain data or functionality which
> might be dangerous for end users.
>
> Current telemetry suggests that ~0.03% of requests would be affected by this
> change of behaviour [1]. To roll that change out we initially are going to
> add a pref  "security.mixed_content.block_object_subrequest" which will be
> enabled for Nightly and Early Beta and ultimately will be flipped on
> permanently for FF60.
>
> We track overall progress here:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1190623
>
> Thanks
>
> Jonathan
>
> [1]
> https://telemetry.mozilla.org/new-pipeline/dist.html#!
> cumulative=0_date=2017-11-15=__none__!__none__!__
> none___channel_version=release%252F57=MIXED_
> CONTENT_OBJECT_SUBREQUEST_channel_version=null&
> processType=*=Firefox=1_keys=
> submissions_date=2017-11-12=0=1_submission_date=0
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Visual Studio 2017 coming soon

2017-10-30 Thread Alex Gaynor
I don't know about C++14 specifically, but a good example is C++17's
std::string_view, which allows an implicit conversion from a temporary
std::string and can very easily lead to a use-after-free (UAF):
https://github.com/isocpp/CppCoreGuidelines/issues/1038

Alex

On Mon, Oct 30, 2017 at 10:52 AM, Simon Sapin  wrote:

> On 30/10/17 15:05, smaug wrote:
>
>> And let's be careful with the new C++ features, pretty please. We
>> managed to not be careful when we started to use auto, or ranged-for
>> or lambdas. I'd prefer to not fix more security critical bugs or
>> memory leaks just because of fancy hip and cool language features ;)
>>
>
> Careful how? How do new language features lead to security bugs? Is new
> compiler code not as well tested and could have miscompiles? Are specific
> features easy to misuse?
>
> --
> Simon Sapin
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: Blocking Top-level Navigations to a data: URI

2017-09-15 Thread Alex Gaynor
You read my mind -- thanks!

Alex

On Fri, Sep 15, 2017 at 1:16 PM, Christoph Kerschbaumer <ckers...@gmail.com>
wrote:

>
> On Sep 15, 2017, at 7:14 PM, Alex Gaynor <agay...@mozilla.com> wrote:
>
> Hi Christoph,
>
> Great stuff!
>
> Are external applications able to trigger loads of data:, e.g. a desktop
> mail application, via the OS protocol handler facilities?
>
>
> Sorry I forgot to mention that explicitly. Since scammers mostly trick
> users by sending emails, those navigations to data: URIs will also be
> blocked.
>
> Alex
>
> On Fri, Sep 15, 2017 at 1:08 PM, Christoph Kerschbaumer <ckerschb@gmail.
> com> wrote:
>
>> Hey Everyone,
>>
>> we plan to prevent web pages from navigating the top-level window to a
>> data: URI. Historically data: URIs caused confusion for end users; mostly
>> because end users are not aware that data: URIs can encode untrusted
>> content into a URL. The fact that data: URIs can execute JavaScript makes
>> them popular amongst scammers for spoofing and phishing attacks.
>>
>> To mitigate that risk we installed a pref 
>> (“security.data_uri.block_toplevel_data_uri_navigations”)
>> which blocks all top-level navigations to a data: URI. We plan to flip that
>> pref in Nightly using “ifdef EARLY_BETA_OR_EARLIER”. In a few weeks we will
>> evaluate whether we can flip on that change in behavior for FF57 or whether
>> we are going to wait to ship that change in behavior till FF58.
>>
>> In more detail, the following cases will be:
>> BLOCKED:
>>  * Navigating to a new top-level data: URI document using:
>>- window.open("data:...");
>>- window.location = "data:..."
>>- clicking links to "data:..." (anchor elements, etc.)
>>  * Redirecting to a new top-level data: URI document using:
>>- 302 redirects to "data:..."
>>- meta refresh to "data:..."
>>
>> ALLOWED:
>>  * User explicitly entering/pasting "data:..." into the URL bar
>>  * Opening "data:image/*" in top-level window, unless it's
>> "data:image/svg+xml"
>>  * Opening “data:application/pdf” in top-level window
>>  * Downloading a data: URI, e.g. 'save-link-as' of "data:..."
>>
>> Our telemetry indicates that Firefox would have blocked 0.01% of all
>> loads in 55 release. It’s unfortunate that the permalink [1] for
>> DOCUMENT_DATA_URI_LOADS stopped working today, so you have to take my word
>> for it. To be fair, those telemetry numbers include all top-level data: URI
>> navigations. Recently we have refined our blocking mechanism and
>> deactivated blocking data:image/* loads as well as data:application/pdf, so
>> we expect the blockage number to be even smaller.
>>
>> Please note that IE/Edge never supported data: URI navigations [2].
>> Chrome started to print a deprecation warning for top-level data: URI
>> navigations within M57 and started to block such navigations within M60.
>>
>> Overall progress of the project will be tracked here:
>>   https://bugzilla.mozilla.org/show_bug.cgi?id=1380959
>>
>> Thanks,
>>  Christoph
>>
>> [1] https://mzl.la/2x5pGRX
>> [2] https://msdn.microsoft.com/en-us/library/cc848897.aspx
>>
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: Blocking Top-level Navigations to a data: URI

2017-09-15 Thread Alex Gaynor
Hi Christoph,

Great stuff!

Are external applications able to trigger loads of data:, e.g. a desktop
mail application, via the OS protocol handler facilities?

Alex

On Fri, Sep 15, 2017 at 1:08 PM, Christoph Kerschbaumer 
wrote:

> Hey Everyone,
>
> we plan to prevent web pages from navigating the top-level window to a
> data: URI. Historically data: URIs caused confusion for end users; mostly
> because end users are not aware that data: URIs can encode untrusted
> content into a URL. The fact that data: URIs can execute JavaScript makes
> them popular amongst scammers for spoofing and phishing attacks.
>
> To mitigate that risk we installed a pref (“security.data_uri.block_
> toplevel_data_uri_navigations”) which blocks all top-level navigations to
> a data: URI. We plan to flip that pref in Nightly using “ifdef
> EARLY_BETA_OR_EARLIER”. In a few weeks we will evaluate whether we can flip
> on that change in behavior for FF57 or whether we are going to wait to ship
> that change in behavior till FF58.
>
> In more detail, the following cases will be:
> BLOCKED:
>  * Navigating to a new top-level data: URI document using:
>- window.open("data:...");
>- window.location = "data:..."
>- clicking links to "data:..." (anchor elements, etc.)
>  * Redirecting to a new top-level data: URI document using:
>- 302 redirects to "data:..."
>- meta refresh to "data:..."
>
> ALLOWED:
>  * User explicitly entering/pasting "data:..." into the URL bar
>  * Opening "data:image/*" in top-level window, unless it's
> "data:image/svg+xml"
>  * Opening “data:application/pdf” in top-level window
>  * Downloading a data: URI, e.g. 'save-link-as' of "data:..."
>
> Our telemetry indicates that Firefox would have blocked 0.01% of all loads
> in 55 release. It’s unfortunate that the permalink [1] for
> DOCUMENT_DATA_URI_LOADS stopped working today, so you have to take my word
> for it. To be fair, those telemetry numbers include all top-level data: URI
> navigations. Recently we have refined our blocking mechanism and
> deactivated blocking data:image/* loads as well as data:application/pdf, so
> we expect the blockage number to be even smaller.
>
> Please note that IE/Edge never supported data: URI navigations [2]. Chrome
> started to print a deprecation warning for top-level data: URI navigations
> within M57 and started to block such navigations within M60.
>
> Overall progress of the project will be tracked here:
>   https://bugzilla.mozilla.org/show_bug.cgi?id=1380959
>
> Thanks,
>  Christoph
>
> [1] https://mzl.la/2x5pGRX
> [2] https://msdn.microsoft.com/en-us/library/cc848897.aspx
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: sccache as ccache

2017-07-26 Thread Alex Gaynor
If you're on macOS, you can also get sccache with `brew install sccache`.

Alex

On Wed, Jul 26, 2017 at 9:05 AM, Ted Mielczarek  wrote:

> Yesterday I published sccache 0.2 to crates.io, so you can now `cargo
> install sccache` and get the latest version (it'll install to
> ~/.cargo/bin). If you build Firefox on Linux or OS X you can (and
> should) use sccache in place of ccache for local development. It's as
> simple as adding this to your mozconfig (assuming sccache is in your
> $PATH):
>
>   ac_add_options --with-ccache=sccache
>
> The major benefit you gain over ccache is that sccache can cache Rust
> compilation as well, and the amount of Rust code we're adding to Firefox
> is growing quickly. (We're on track to enable building Stylo by default
> soon, which will add quite a bit of Rust.)
>
> On my several-year-old Linux machine (Intel(R) Core(TM) i7-3770 CPU @
> 3.40GHz, 32GB, SSD), if I build; clobber; build with sccache enabled the
> second (fully-cached) build completes in just over 4 minutes:
>
>   4:11.92 Overall system resources - Wall time: 252s; CPU: 69%; Read
>   bytes: 491520; Write bytes: 6626512896; Read time: 60; Write time:
>   1674852
>
> sccache still isn't completely straightforward to use on Windows[1] but
> I aim to fix that this quarter so that using it there will be just as
> simple as on other platforms.
>
> -Ted
>
> 1. https://bugzilla.mozilla.org/show_bug.cgi?id=1318370
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Enabling filesystem read-restrictions for content process sandbox

2017-07-06 Thread Alex Gaynor
Hi dev-platform,

On behalf of the Runtime Content Isolation (aka sandboxing) team, I'm
delighted to announce that starting later this week, our macOS and Windows
nightly builds will prohibit read access to most of the filesystem in the
content process!

What does this mean for you? First and foremost, a more secure browser!
Second, it means that if you see bugs, please report them; our goal is to
put this on the trains for 56! If you run into anything, please file it as
a blocker for https://bugzilla.mozilla.org/show_bug.cgi?id=1377522 .

Finally, it means that in code you're writing, you should not expect to be
able to read from the filesystem in the content process -- with the
exceptions of the .app bundle and the chrome/ subdirectory of the profile
directory.

If you need access to a file in content, you should plan on remoting that
access to the parent process. When designing these APIs, please be careful
to ensure the parent process performs appropriate permission checks, so
that the IPC mechanism can't be used to bypass the sandbox's goal of
preventing a malicious content process from accessing the entire file
system.

This represents the culmination of a lot of work by a lot of folks, both on
our
team and on many other teams who helped out with refactoring their code --
thank
you!

We're looking forward to also shipping this for Linux soon.

Cheers,
Alex
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mochitest on packaged builds with the sandbox

2017-05-10 Thread Alex Gaynor
Hey Kip,

We should probably move this to a different thread

The parent process has full privileges, so it can read whatever it wants.
As long as we express the IPC in a way that the parent can enforce rules
about filesystem access, basically so the child process doesn't use the IPC
as a generic "I can access _all_ the files now", I don't see a problem with
remoting these accesses. This lets us avoid needing to maintain more
variations in sandbox rules.

Alex

On Tue, May 9, 2017 at 7:08 PM, Kearwood Kip Gilbert <kgilb...@mozilla.com>
wrote:

> +cc Diego, who has experimented with packaging Firefox on Steam...
>
>
>
> Thanks Alex,
>
>
>
> It sounds as though we can work around all of these issues, based on your
> feedback.  I believe that we could proxy any file access needed by the
> content process.  The main process will need to access some resources in
> these builds that are not normally accessed by the normal Firefox builds
> for the Steam integration.  Perhaps opening these resources isn’t too much
> concern for the regular Firefox builds.  If not, we could perhaps select
> these alternate sandboxing rules with a command-line parameter?
>
>
>
> We have also explored treating this “Steam VR Browser” as a separate
> product, and using compile-time flags to select the desired code to build
> and assets.  We could still hard-code the rules, but skip them at build
> time for regular Firefox builds.  In this case, we are effectively forking
> the “browser” directory into a “vrbrowser” and stripping it down to just
> what we need.
>
>
>
> We liked the re-pack option (ie. Similar to QBRT), as we don’t have to put
> as much load on our test infrastructure; however, this is still an option
> on the table if a dedicated build and the associated costs is justified.
>
>
>
> Cheers,
>
>- Kearwood “Kip” Gilbert
>
>
>
> *From: *Alex Gaynor <agay...@mozilla.com>
> *Sent: *May 9, 2017 7:58 AM
> *To: *Kearwood Kip Gilbert <kgilb...@mozilla.com>
> *Cc: *dev-platform@lists.mozilla.org
> *Subject: *Re: Running mochitest on packaged builds with the sandbox
>
>
>
> Hi,
>
> Hmm, VR appears to be an interesting challenge.
>
> To expand a bit more on why mochitest+sandboxing is a challenge for
> packaged builds: The way mochitest is set up is that there's a
> configuration file which points to JS files to be loaded for tests. These
> are loaded by the content process. This works ok in non-packaged builds,
> because in those builds we allow read access to the entire source directory
> ("topsrcdir"); in packaged builds, we don't have this whitelist -- we only
> allow read access inside of the .app (to use the macOS lingo), so
> essentially content is trying to open a random JS file, and is blocked.
>
> With that context, disabling sandboxing might be one option, another is
> for your repack to include the mochitest JS files inside the packaged
> build, then everything can work ok. We don't want to do this for general
> packaged builds because there's no reason for SuperPowers.js (for example)
> to be in a shipped FF, but if you're doing a special pack for testing it
> might make sense.
>
> Does that make sense?
>
> For your other question, about configuration of sandboxing rules. Right
> now we have the ability to have multiple different sets of sandbox rules,
> for example plugin processes and content processes have different sandbox
> rules, and so will GPU processes soon. With that said, it sounds like what
> you're talking about is really in the content process -- for that case,
> you're really better off remoting access to these resources through the
> parent process. This keeps us from expanding the attack surface in content
> (which is most exposed to the dangerous web), right now we're doing all we
> can to restrict this, so I wouldn't be excited about opening it back up :-)
>
> Cheers,
>
> Alex
>
>
>
> On Mon, May 8, 2017 at 2:14 PM, Kearwood Kip Gilbert <kgilb...@mozilla.com>
> wrote:
>
> Hi all,
>
>
>
> The VR team is working on a Steam packaged browser with a VR specific UI
> and richer VR experience.  In order to prevent the overhead of having the
> VR specific assets included in every Firefox build while still running on
> tested executables, we will be doing a repack build.
>
>
>
> WebVR will still continue to be supported in the regular Firefox builds;
> API surface area is the same in normal desktop builds.  Mochitests
> validating these API calls should be unaffected.
>
>
>
> We will need a means for testing the VR frontend and assets that are added
> with the re-pack.  Ideally, we could use the existing test mechanisms,
> including m

Re: Running mochitest on packaged builds with the sandbox

2017-05-10 Thread Alex Gaynor
After some follow-up investigation, it turns out the macOS tests also rely
on this, we just hadn't noticed due to a different bug :-(

Short version: Sorry for wasting your time, you can ignore this thread.

All the other feedback we've gotten about general sandbox debugging will
still be considered :-)

Cheers,
Alex

On Tue, May 9, 2017 at 2:25 PM, Gian-Carlo Pascutto <g...@mozilla.com> wrote:

> On 08-05-17 19:26, Alex Gaynor wrote:
> > Hi dev-platform,
> >
> > Top-line question: Do you rely on being able to run mochitests from a
> > packaged build (`--appname`)?
>
> It seems our Linux tests do, actually:
>
> https://treeherder.mozilla.org/logviewer.html#?job_id=97391302&repo=try
>
> "test-linux32/opt-mochitest-e10s-1 tc-M-e10s(1)" launches
>
> /home/worker/workspace/build/tests/mochitest/runtests.py --total-chunks
> 10 --this-chunk 1
> --appname=/home/worker/workspace/build/application/firefox/firefox
> --utility-path=tests/bin --extra-profile-file=tests/bin/plugins
>
> Using --appname. As expected, this then fails because the sandbox didn't
> get the exception added and correctly blocks the reading of some test
> files that are in a random place:
>
> /home/worker/workspace/build/tests/mochitest/extensions/
> specialpowers/chrome/specialpowers/content/MozillaLogger.js
>
> We may need another solution here.
>
> --
> GCP
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mochitest on packaged builds with the sandbox

2017-05-09 Thread Alex Gaynor
Hi Ehsan,

If we want to dig deeper, let's fork off another thread, but it sounds like
there's two action items here:

1) Fix https://bugzilla.mozilla.org/show_bug.cgi?id=1345046
2) Better document how to disable the sandbox for debugging -- where would
you expect to find docs on this, https://wiki.mozilla.org/Security/Sandbox,
somewhere else?

Cheers,
Alex

On Tue, May 9, 2017 at 10:49 AM, Ehsan Akhgari <ehsan.akhg...@gmail.com>
wrote:

> Hi Alex,
>
> Apologies for hijacking the thread, but since you asked, right now
> debugging mochitest that you want to get some logging out of with a
> sandboxed content process is super painful.  I last hit it when I was
> debugging a memory leak which typically requires getting refcount leak logs
> and it took me quite a while to find the wiki page that describes the pref
> that I needed to set in order to turn off the sandbox so that any logging
> in the content process would be able to write to a log file (and I couldn't
> even find it again to include a link to the wiki page here once again!).
>
> I thought I'd mention it since you were asking about stuff that can be
> painful when debugging test failures with sandboxed content processes.  :-)
>
> Thanks,
>
> Ehsan
>
>
>
> On 05/08/2017 01:26 PM, Alex Gaynor wrote:
>
>> Hi dev-platform,
>>
>> Top-line question: Do you rely on being able to run mochitests from a
>> packaged build (`--appname`)?
>>
>> Context:
>>
>> The sandboxing team has been hard at work making the content process
>> sandbox as restrictive as possible. Our latest focus is  removing file
>> read
>> permissions from content processes -- the sandbox's value is pretty
>> limited
>> if a compromised content process can ship all your files off by itself!
>>
>> One of the things we've discovered in the process of developing these
>> patches is that they break running mochitest on packaged firefox builds
>> (this is the `--appname` flag to mochitest)! `try` doesn't appear to use
>> this, and none of us use it in our development workflows, but we wanted to
>> check in with dev-platform and see if we were going to be breaking
>> people's
>> development flows! While these restrictions are not on by default yet,
>> once
>> they are you'd only be able to run tests on packaged builds by disabling
>> the sandbox. If this is a fundamental part of lots of folks' workflows
>> we'll dig into whether there's a way to keep this working.
>>
>> Happy Monday!
>> Alex
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mochitest on packaged builds with the sandbox

2017-05-09 Thread Alex Gaynor
Hi,

Hmm, VR appears to be an interesting challenge.

To expand a bit more on why mochitest+sandboxing is a challenge for
packaged builds: mochitest is set up with a configuration file that points
to JS files to be loaded for tests, and those files are loaded by the
content process. This works fine in non-packaged builds, because there we
allow read access to the entire source directory ("topsrcdir"). Packaged
builds have no such allowance -- we only allow read access inside the .app
(to use the macOS lingo) -- so content is effectively trying to open an
arbitrary JS file outside the bundle, and is blocked.

With that context, disabling sandboxing might be one option; another is for
your repack to include the mochitest JS files inside the packaged build, so
that everything can work normally. We don't want to do this for general
packaged builds because there's no reason for SpecialPowers.js (for
example) to be in a shipped Firefox, but if you're doing a special pack for
testing it might make sense.

Does that make sense?

For your other question, about configuration of sandboxing rules: right now
we have the ability to have multiple different sets of sandbox rules -- for
example, plugin processes and content processes have different rules, and
GPU processes soon will too. That said, it sounds like what you're
describing really lives in the content process, and for that case you're
better off remoting access to those resources through the parent process.
That keeps us from expanding the attack surface in content (which is the
most exposed to the dangerous web); right now we're doing all we can to
restrict it, so I wouldn't be excited about opening it back up :-)
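In sketch form, the brokering pattern is: content never touches the resource directly, it asks the parent, and the parent enforces policy. The names and the policy table below are hypothetical, and the real mechanism would hand a descriptor back over IPC:

```python
# Hypothetical policy: the only resources the parent will open on
# behalf of a content process.
ALLOWED_RESOURCES = {"steam-pipe": "/tmp/steam.pipe"}

def parent_broker(request):
    # Parent-side handler: validate the request, then act for content.
    kind, name = request
    if kind != "open" or name not in ALLOWED_RESOURCES:
        return ("deny", None)
    # In a real design the parent would open the pipe/file and pass the
    # descriptor back over IPC; here we just return the resolved path.
    return ("ok", ALLOWED_RESOURCES[name])

def content_request(name):
    # Content-side stub: every access goes through the parent, so the
    # content sandbox never needs a rule for the resource itself.
    return parent_broker(("open", name))
```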

Cheers,
Alex

On Mon, May 8, 2017 at 2:14 PM, Kearwood Kip Gilbert <kgilb...@mozilla.com>
wrote:

> Hi all,
>
>
>
> The VR team is working on a Steam-packaged browser with a VR-specific UI
> and a richer VR experience.  To avoid the overhead of including the
> VR-specific assets in every Firefox build while still running on
> tested executables, we will be doing a repack build.
>
>
>
> WebVR will continue to be supported in the regular Firefox builds; the
> API surface area is the same in normal desktop builds.  Mochitests
> validating these API calls should be unaffected.
>
>
>
> We will need a means for testing the VR frontend and assets that are added
> with the re-pack.  Ideally, we could use the existing test mechanisms,
> including mochitests.
>
>
>
> Perhaps we could disable the sandbox for these front-end tests?
>
>
>
> The Steam packaged builds will also need slightly expanded access to
> resources such as files, registry, and pipes required for communication
> with Steam.
>
>
>
> Are there any plans to make the sandboxing rules configurable at runtime?
>
>
>
> Cheers,
>
>- Kearwood “Kip” Gilbert
>
>
>
>
>
> *From: *Alex Gaynor <agay...@mozilla.com>
> *Sent: *May 8, 2017 10:26 AM
> *To: *dev-platform@lists.mozilla.org
> *Subject: *Running mochitest on packaged builds with the sandbox
>
>
>
> Hi dev-platform,
>
> Top-line question: Do you rely on being able to run mochitests from a
> packaged build (`--appname`)?
>
> Context:
>
> The sandboxing team has been hard at work making the content process
> sandbox as restrictive as possible. Our latest focus is removing file read
> permissions from content processes -- the sandbox's value is pretty limited
> if a compromised content process can ship all your files off by itself!
>
> One of the things we've discovered in the process of developing these
> patches is that they break running mochitest on packaged Firefox builds
> (this is the `--appname` flag to mochitest)! `try` doesn't appear to use
> this, and none of us use it in our development workflows, but we wanted to
> check in with dev-platform and see if we were going to be breaking people's
> development flows! While these restrictions are not on by default yet, once
> they are you'd only be able to run tests on packaged builds by disabling
> the sandbox. If this is a fundamental part of lots of folks' workflows
> we'll dig into whether there's a way to keep this working.
>
> Happy Monday!
> Alex
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Running mochitest on packaged builds with the sandbox

2017-05-08 Thread Alex Gaynor
Hi dev-platform,

Top-line question: Do you rely on being able to run mochitests from a
packaged build (`--appname`)?

Context:

The sandboxing team has been hard at work making the content process
sandbox as restrictive as possible. Our latest focus is removing file read
permissions from content processes -- the sandbox's value is pretty limited
if a compromised content process can ship all your files off by itself!

One of the things we've discovered in the process of developing these
patches is that they break running mochitest on packaged Firefox builds
(this is the `--appname` flag to mochitest)! `try` doesn't appear to use
this, and none of us use it in our development workflows, but we wanted to
check in with dev-platform and see if we were going to be breaking people's
development flows! While these restrictions are not on by default yet, once
they are you'd only be able to run tests on packaged builds by disabling
the sandbox. If this is a fundamental part of lots of folks' workflows
we'll dig into whether there's a way to keep this working.

Happy Monday!
Alex
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform