Re: what if to not to give firefox sys_admin capability with apparmor?

2020-02-14 Thread Jed Davis
On Monday, February 10, 2020 at 11:14:26 AM UTC-7, gcpas...@gmail.com wrote:
> IIRC CAP_SYS_ADMIN is needed to install seccomp-bpf filters.

We don't need capabilities for seccomp-bpf.
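For illustration, a minimal sketch of an unprivileged filter install (the
one-instruction allow-everything policy and the trimmed error handling are
mine, for brevity; real policies are far larger):

  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <stdio.h>
  #include <sys/prctl.h>

  int main() {
    // One-instruction policy: allow every syscall.  A real policy
    // inspects seccomp_data and runs to hundreds of instructions.
    struct sock_filter insns[] = {
      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog;
    prog.len = sizeof(insns) / sizeof(insns[0]);
    prog.filter = insns;
    // The no_new_privs promise is what makes this safe without
    // privileges: the filter can't be used to confuse setuid programs.
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0 ||
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
      perror("seccomp");
      return 1;
    }
    return 0;
  }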

We do need capabilities for anything namespace-related: chroot()ing to a 
deleted directory to revoke filesystem access, as mentioned, as well as 
creating isolated namespaces for networking and SysV IPC (and at some point in 
the future also process IDs).  These form an additional layer of protection.

But there seems to be a significant misunderstanding in that forum thread: the 
capabilities that Firefox uses are within the scope of an unprivileged user 
namespace, as explained in the user_namespaces(7) man page.  Allowing these 
capabilities may expose additional kernel attack surface, but in the absence of 
exploitable kernel bugs they don't increase the set of resources that can be 
accessed.  For example, CAP_SYS_CHROOT in this context doesn't allow tricking 
setuid root executables into doing anything, because it's restricted to a user 
namespace where the real root user doesn't exist and setuid execution doesn't 
work.
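To make that concrete, a minimal sketch of the pattern (my illustration,
not Firefox's actual code; the real sandbox also sets up uid/gid mappings,
and the chroot target below is a stand-in for the deleted directory
mentioned above):

  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main() {
    // Works for an unprivileged, single-threaded process wherever
    // unprivileged user namespaces are enabled.
    if (unshare(CLONE_NEWUSER) != 0) {
      perror("unshare(CLONE_NEWUSER)");
      return 1;
    }
    // We now hold a full capability set -- CAP_SYS_CHROOT included --
    // but only within the new namespace; nothing was gained in the
    // initial namespace, and setuid executables are inert here.
    if (chroot("/var/empty") != 0) {  // stand-in for a deleted directory
      perror("chroot");
      return 1;
    }
    return 0;
  }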

Unfortunately, AppArmor doesn't seem to distinguish between these limited 
capabilities and the "real" capabilities in the initial user namespace, at 
least not by default.

--Jed


Re: Intent to deprecate - linux32 tests starting with Firefox 69

2019-04-08 Thread Jed Davis
jma...@mozilla.com writes:

> As our next ESR is upcoming, I would like to turn off linux32 on
> Firefox 69 and let it ride the trains and stay on 68 ESR.  This will
> allow builds/tests to be supported with security updates into 2021.

Does this mean that Linux on 32-bit x86 is being demoted to Tier 3, like
(non-Android) Linux on other uncommon architectures?  Have we identified
a maintainer to take responsibility for testing and working with us to
fix regressions?  I don't see how it can remain Tier 1 without CI coverage.

Also, will it still be possible to explicitly request linux32 on Try
runs?  Our cross-building story is not good, and being able to use Try
for this is helpful for avoiding regressions and testing changes to
arch-dependent code.

For context, I'm the module owner for Linux sandboxing, and we have to
deal with low-level details of the system call ABI, and as a result
there is nontrivial code used only for linux32 that I'm responsible for.

--Jed


Re: Performance profiling improvements #3

2018-10-31 Thread Jed Davis
Mike Hommey writes:

> On Mon, Oct 22, 2018 at 02:20:32PM -0700, Panos Astithas wrote:
>> To record a profile with the ‘perf’ command run the
>> following commands and then load the firefox.symbol.data output file from
>> https://perf-html.io:
>> > sudo perf record -g -F 999 -p <pid>
>> > sudo perf script -F +pid > firefox.symbol.data
>
> sudo shouldn't be necessary in either case.

Changing the `kernel.perf_event_paranoid` sysctl (documented in the
perf_event_open(2) man page; see also sysctl(8) and /etc/sysctl.conf to
change it persistently) may be necessary to use perf as non-root.

Also, Jeff Muizelaar writes:

> I think sudo will let you have symbolicated kernel stacks which can be handy.

Lowering kernel.perf_event_paranoid to 1 or less seems to cover this:
empirically on Fedora, the addresses in /proc/kallsyms read as "(null)"
if it's set to 2 but give actual addresses if it's lower.  (This makes
sense: if you can capture kernel stacks with perf then you can easily
break KASLR, so there's no point in redacting the symbol table.)
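For anyone who wants to check a machine quickly, a small sketch that
reports the current setting (the threshold descriptions are my paraphrase;
perf_event_open(2) is authoritative):

  #include <fstream>
  #include <iostream>

  int main() {
    std::ifstream f("/proc/sys/kernel/perf_event_paranoid");
    int level;
    if (!(f >> level)) {
      std::cerr << "couldn't read perf_event_paranoid\n";
      return 1;
    }
    std::cout << "kernel.perf_event_paranoid = " << level << "\n";
    if (level >= 2)
      std::cout << "user-space measurement of own processes only\n";
    else if (level == 1)
      std::cout << "kernel profiling also allowed (kallsyms usable)\n";
    else
      std::cout << "CPU-wide/system-wide access also allowed\n";
    return 0;
  }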

--Jed


Re: Enabling (many) assertions in opt builds locally and eventually Nightly

2018-09-20 Thread Jed Davis
Cameron McCormack writes:

> (I wonder if we could collect all the same data, and use the same
> crash reporting infrastructure, for non-crashing crash reports like
> this.)

For what it's worth, I've done something very close to this
*accidentally*, on Linux, by manually sending a crash signal to test
crash reporting and hitting bugs in how the signal is re-raised[1], such
that it just returns to the not-really-crashed code.  (It does unhook
the signal handlers, so the next crash will kill the process, but that
could be changed.)
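Conceptually it reduces to something like this tiny sketch (mine, not the
Breakpad code; a real handler would capture a minidump instead of
printing):

  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  // Handler for a user-sent "crash" signal: record a report and return,
  // so the not-really-crashed code simply resumes.
  static void ReportHandler(int) {
    const char msg[] = "captured a crash report; continuing\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
  }

  int main() {
    signal(SIGUSR1, ReportHandler);
    raise(SIGUSR1);               // stands in for the manually sent signal
    puts("process still alive");  // control returns here afterward
    return 0;
  }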

I never tried to make it an actual feature -- I had concerns about rate
limiting, and maybe some other things I've forgotten, and then the work
that's been mentioned here about telemetry seemed like it was solving
the same problem in a more generally useful way.

It would have been useful for desktop Linux sandboxing, because there
are cases where we don't need to crash, and want to avoid crashing if
possible on release, but we'll likely need at least a crash report's
worth of info to figure out what's going on; instead we settled for
crashing only on Nightly.

--Jed

[1] 
https://searchfox.org/mozilla-central/rev/0b8ed772d24605d7cb44c1af6d59e4ca023bd5f5/toolkit/crashreporter/breakpad-client/linux/handler/exception_handler.cc#398


Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-06 Thread Jed Davis
Ted Mielczarek <t...@mielczarek.org> writes:

> On Tue, Jul 5, 2016, at 11:18 PM, Jed Davis wrote:
>> (However, there aren't automated
>> tests to ensure it keeps working; "crashing the content process" isn't a
>> use case that the test framework docs were very helpful with.)
>
> FYI, a number of mochitest-browser-chrome tests have since been written
> that test crashing the content process:
> https://dxr.mozilla.org/mozilla-central/search?q=path%3Abrowser_*crash=true

That would explain it -- originally only B2G could use any of this, and
on B2G there was no browser-chrome, only mochitest-plain (and reftests).

> There's even a BrowserTestUtils.crashBrowser function now:
> https://dxr.mozilla.org/mozilla-central/rev/70e05c6832e831374604ac3ce7433971368dffe0/testing/mochitest/BrowserTestUtils/BrowserTestUtils.jsm#758
>
> That just does a simple near-null deref using ctypes, but it could
> easily be expanded to be more useful.

Thanks for the pointers; filed bug 1285055.

--Jed


Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Jed Davis
Steve Fink writes:

> On 07/05/2016 01:33 AM, Julian Hector wrote:
>> If you encounter a crash that may be due to seccomp, please file a bug in
>> bugzilla and block Bug 1280415, we use it to track issues experienced on
>> nightly.
>
> What would such a crash look like? Do they boil down to some system
> call returning EPERM?

The relatively short version: it raises SIGSYS, and the signal handler
can take arbitrary actions (e.g., polyfilling open() to message an
out-of-process broker instead), but the default is currently to log a
message to stderr, invoke the crash reporter[*], and terminate the
process.
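As a self-contained illustration of the mechanism (my sketch, x86-64 only;
note that snprintf in a signal handler isn't async-signal-safe, which is
tolerable in a demo but not in production), here's a filter that traps
uname(2) and a handler that turns it into ENOSYS rather than a crash:

  #include <errno.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <signal.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <sys/utsname.h>
  #include <ucontext.h>
  #include <unistd.h>

  // Report the rejected syscall, then rewrite the return register so the
  // trapped call fails with ENOSYS and the process keeps running.
  static void SigSysHandler(int, siginfo_t* info, void* void_ctx) {
    char buf[64];
    int len = snprintf(buf, sizeof(buf), "SIGSYS: syscall #%d\n",
                       info->si_syscall);
    write(STDERR_FILENO, buf, len);
    ucontext_t* ctx = static_cast<ucontext_t*>(void_ctx);
    ctx->uc_mcontext.gregs[REG_RAX] = -ENOSYS;  // x86-64 specific
  }

  int main() {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    act.sa_sigaction = SigSysHandler;
    act.sa_flags = SA_SIGINFO;
    sigaction(SIGSYS, &act, nullptr);

    // Trap uname(2), allow everything else.  (A real policy must also
    // check seccomp_data.arch before trusting the syscall number.)
    struct sock_filter insns[] = {
      BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
      BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_uname, 0, 1),
      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog;
    prog.len = sizeof(insns) / sizeof(insns[0]);
    prog.filter = insns;
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    struct utsname u;
    if (uname(&u) != 0)
      perror("uname");  // expected: "Function not implemented"
    return 0;
  }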

--Jed

[*] Also dumps the C stack directly if the crash reporter isn't
available, and the JS stack in either case; both of these are unsafe if
the syscall was in async signal context or had important locks held, but
you were crashing anyway.


Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Jed Davis
Benjamin Smedberg writes:

> Assuming these crashes show up in crash-stats.mozilla.com, are there
> particular signatures, metadata, or other patterns that would let us say
> "this crash is caused by a sandbox failure"?

They should, and the expected distinguishing feature is a "Crash Reason"
of "SIGSYS".  I put a certain amount of work into getting the crash
reporting integration to work properly back in the B2G era, and on
desktop for media plugin processes.  (However, there aren't automated
tests to ensure it keeps working; "crashing the content process" isn't a
use case that the test framework docs were very helpful with.)

For additional filtering/faceting, the "crash address" is the syscall
number (but note that its meaning depends on architecture).

One more thing: it might be a good idea to reconsider the
crash-by-default policy for desktop content, at least for release
builds.  On B2G, the decision to do that was informed by the assumption
that builds would be comprehensively tested before being released to end
users, that the presence of crashes under test would block release, and
that this was all running in a relatively fixed environment.
Approximately none of that is true on desktop, especially the last item;
it seems to have worked out for media plugins, but they're much more
self-contained than content, and I don't know how widely used they are.
And there are other options for reporting diagnostic info which trade
off detail for hopefully not crashing.  It looks like there was never a
bug about this, which I guess means I should file one.

--Jed


Re: Static analysis for "use-after-move"?

2016-04-27 Thread Jed Davis
Kyle Huey writes:

> Can we catch this pattern with a compiler somehow?
>
> Foo foo;
> foo.x = thing;
> DoBar(mozilla::Move(foo));
> if (foo.x) { /* do stuff */ }

https://bugzilla.mozilla.org/show_bug.cgi?id=1186706
("Consider static analysis for Move semantics")

There are patches on the bug, but it looks like they needed a little
more work to be landable.
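In the meantime, current clang-tidy ships a use-after-move check that
catches this shape of bug; a sketch of the pattern it flags (using
std::move, of which mozilla::Move is Gecko's equivalent):

  #include <string>
  #include <utility>
  #include <vector>

  void DoBar(std::vector<std::string> v) { /* consumes v */ }

  int main() {
    std::vector<std::string> foo;
    foo.push_back("thing");
    DoBar(std::move(foo));
    // foo is now in a valid but unspecified state; reading it here is
    // what bugprone-use-after-move reports.
    if (!foo.empty()) {
      // do stuff
    }
    return 0;
  }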

--Jed


Re: Requiring a try job prior to autolanding to inbound

2016-01-28 Thread Jed Davis
Adam Roach writes:

> My understanding is that the autolander is available only to
> developers with Level 3 access, right? Given that this is the same
> group of people who can do a manual check-in, I don't see why we would
> make autolanding have to clear a higher bar than manual landing.

We could allow people to override the requirement while still letting
the default UX be to press a button and have the Try run and landing
taken care of automatically.  (This is in my fantasy world where a Try
run doesn't mean squinting at a bunch of intermittent oranges and hoping
they're not somehow related to your patch in some non-obvious way.)

Put another way: why are we even framing this in terms of "requiring"?
The goal is to help people contribute code and accept code contributions
by providing explicit structure and automation to minimize tedious
error-prone human effort.  I conjecture that, if it's no harder to do a
Try run than to skip it, there won't be any need for a strict requirement.

--Jed


Re: HEADS-UP: Disabling Gecko Media Plugins on older Linux kernels (bug 1120045)

2015-01-29 Thread Jed Davis
On Thu, Jan 29, 2015 at 06:57:30AM +0900, Mike Hommey wrote:
> So, in practice, because the h264 code is not sandboxed on some setups,
> we're disabling it so that vp8, which is not sandboxed either, is used
> instead. We have about the same amount of control over openh264 and
> vp8 code bases. What makes the difference?

This is more a question for the WebRTC module leadership, but: assuming
the attacker can choose the codec (do we always secure the media content
at least as much as the script that set up the session?), the set of
vulnerabilities is the union of the codecs' vulnerabilities, and adding
a codec can only add more of them.

Possibly also relevant: we already prefer VP8 over H.264 on desktop.

--Jed



HEADS-UP: Disabling Gecko Media Plugins on older Linux kernels (bug 1120045)

2015-01-28 Thread Jed Davis
Short version: On desktop Linux systems too old to support seccomp-bpf
system call filtering[1], Gecko Media Plugins will be disabled; in
practice, this means OpenH264, which is used for H.264 video compression
in WebRTC.  This will be controlled with a pref, media.gmp.insecure.allow.

[1] Examples of sufficiently new distribution versions: Ubuntu 12.04,
Fedora 18, RHEL/CentOS 7, Debian 8.0.
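The support check itself is cheap; here's a sketch of the usual runtime
probe (the same trick Chromium uses, as far as I know):

  #include <errno.h>
  #include <linux/seccomp.h>
  #include <stdio.h>
  #include <sys/prctl.h>

  int main() {
    // With a NULL filter pointer, a kernel that supports seccomp-bpf
    // fails with EFAULT while copying the program; a kernel that
    // doesn't rejects SECCOMP_MODE_FILTER outright with EINVAL.
    errno = 0;
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, nullptr);
    puts(errno == EFAULT ? "seccomp-bpf: supported"
                         : "seccomp-bpf: not supported");
    return 0;
  }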

More background: Currently it's difficult to evaluate the security
implications of bugs in OpenH264 with respect to Firefox, because
approximately 99.9% of Firefox instances run it in some kind of OS-level
sandboxing, but the remaining ~0.1% are fully exposed.

Disabling OpenH264 would break WebRTC interoperability with endpoints
that support only H.264 and not VP8, but if those exist then they're
currently also incompatible with Google Chrome.  Also, mobile devices
with hardware H.264 acceleration might prefer it, and would use more
CPU/power if forced to fall back to VP8.

Given the specifics of this security/usability tradeoff, we're changing
the default to security.

Note that Fedora and Debian already disable OpenH264 by default in their
Firefox/Iceweasel (respectively) builds due to their license policies.

Even more background: Firefox on Windows and OS X can sandbox media
plugins unconditionally, but on Linux the situation is more complicated.
We're using the seccomp-bpf system call filter (CONFIG_SECCOMP_FILTER),
which is available for ~96% of desktop Linux Firefox, with a restrictive
policy.  In principle the classic Chromium sandbox would be more
compatible, but it requires a setuid root executable, whereas
seccomp-bpf needs no special privileges (nor changes to
release/test/install infrastructure).

--Jed


Re: Non-Trivial SpecialPowers Usage Considered Harmful

2014-08-18 Thread Jed Davis
Bobby Holley <bobbyhol...@gmail.com> writes:
[...]
> If you find yourself itching to do something complicated, write a
> mochitest-chrome test. The default template [2] now generates html files
> (rather than XUL files), so the ergonomics there should be easier than
> before.
>
> If you don't want to write a mochitest-chrome test for some reason, you can
> also use SpecialPowers.loadChromeScript(uri), which lets mochitest-plain
> asynchronously load a privileged JS file in its own privileged scope.

On e10s-enabled platforms, does loadChromeScript run the script in the
parent process?  There are currently a few mochitests (plain) that are
SpecialPowers'ing nsLocalFile (or other classes that do direct
filesystem access) in the content process, and I'd like to change them
to remote that part of the test to the parent process -- preferably
without reducing test coverage.  (See also: https://bugzil.la/1043470#c6 )

--Jed


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Jed Davis
On Tue, Dec 03, 2013 at 11:47:48AM -0800, L. David Baron wrote:
> On Tuesday 2013-12-03 10:18 -0800, Brian Smith wrote:
>> Also, I would be very interested in seeing size of libxul.so for
>> fully-optimized (including PGO, where we normally do PGO) builds. Do
>> unified builds help or hurt libxul size for release builds? Do unified
>> builds help or hurt performance in release builds?
>
> I'd certainly hope that nearly all of the difference in size of
> libxul.so is debugging info that wouldn't be present in a non-debug
> build.  But it's worth testing, because if that's not the case,
> there are some serious improvements that could be made in the C/C++
> toolchain...

At the risk of stating the obvious, localizing the size change should
be a simple matter of `readelf -WS`.  If we're seeing actual change
in .text size by way of per-directory sort-of-LTO, then that would be
interesting (and could be localized further with the symbol table).

--Jed



Re: Is there any reason not to shut down bonsai?

2013-11-21 Thread Jed Davis
On Thu, Nov 21, 2013 at 05:41:27PM -0500, Boris Zbarsky wrote:
> On 11/21/13 3:15 PM, Gavin Sharp wrote:
>> It would be good to explore alternatives to Bonsai.
>> https://github.com/mozilla/mozilla-central is supposed to have full
>> CVS history, right?
>
> Hmm.  Where in there is the equivalent of
> http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla%2Flayout%2Fhtml%2Fforms%2Fsrc%2FAttic%2FnsGfxTextControlFrame2.cpp&rev=&cvsroot=%2Fcvsroot
> ?
>
> Granted, getting there in bonsai is a pain too, but that's what you
> have to do to get blame across file moves/deletes in CVS...

I had to cheat somewhat, because git doesn't have an Attic, and...
um, are we sure our CVS-to-Git conversion is entirely correct?  I'm seeing
nsGfxTextControlFrame2.{cpp,h} being deleted in commit 252470c83 (bug 129909),
but also that they were replaced by empty files in commit ec1abf4d5 (bug 88049)
which seems wrong given a quick glance through the bug.

Anyway, having cheated and used bonsai to see the commit message for
the deletion to help find the commits mentioned above (am I missing
something, or does git-rev-parse not let you search for "touched this
path"?), I can do this:

  git blame 'ec1abf4d5^' -- layout/html/forms/src/nsGfxTextControlFrame2.cpp

and there's the blame (as of the first parent of ec1abf4d5, which is the
last revision where the file exists and is non-empty).  Or, on the web:

  
https://github.com/mozilla/mozilla-central/blame/ec1abf4d5^/layout/html/forms/src/nsGfxTextControlFrame2.cpp

Which is... not the prettiest interface to the blame data.

--Jed



Re: Code coverage take 2, and other code hygiene tools

2013-06-25 Thread Jed Davis
On Mon, Jun 24, 2013 at 08:02:26PM -0700, Justin Lebar wrote:
> Under what circumstances would you expect the code coverage build to break
> but all our other builds to remain green?

Anywhere you're using -Werror.  I ran into this in a past life with
GCC's may-use-uninitialized warning; if it's still an issue, I'd suggest
doing coverage builds with -Wno-error.

If you have any coverage-specific code, changes to anything it depends
on could also break the coverage build -- but this might not be an issue
for hosted programs, which don't have to bring their own runtime.

And there are some assorted weirdnesses that might indirectly break
something: multiprocessor performance falls off a cliff because the
counters are global (unless things have changed significantly since I
last dealt with this), and there are internally generated references to
coverage runtime functions that ignore the symbol visibility pragma
and possibly other things.  But Gecko might not be doing enough
interesting low-level things to care.

--Jed



Re: Storage in Gecko

2013-05-06 Thread Jed Davis
On Mon, May 06, 2013 at 09:41:08AM -0700, David Dahl wrote:
> KyotoCabinet might make a good backend for a new storage API:
>
> http://fallabs.com/kyotocabinet/

It's released under the GPL, so it's MPL-incompatible, if I understand
correctly.  As for the Kyoto Products Specific FOSS Library Linking
Exception, at http://fallabs.com/license/linkexception.txt -- it
currently lists exactly one library (not us) and seems to indicate that,
even if Gecko were so listed, a Specific Library that re-exports Kyoto
Cabinet's functionality to other applications would not be allowed.

--Jed (not a lawyer)



Re: Please upgrade to at least Mercurial 2.5.1

2013-02-21 Thread Jed Davis
On Thu, Feb 21, 2013 at 11:36:15AM +, Gervase Markham wrote:
> The Mercurial download page:
> http://mercurial.selenic.com/downloads/
> offers 2.5.1 for Mac and Windows, but no Linux packages. Can guidance be
> provided as to where to get such things for commonly-run versions of Linux?

Debian experimental has it.  My sources.list looks like this:

deb http://ftp.us.debian.org/debian/ unstable main contrib non-free
deb-src http://ftp.us.debian.org/debian/ unstable main contrib non-free
deb http://ftp.us.debian.org/debian/ experimental main contrib non-free
deb-src http://ftp.us.debian.org/debian/ experimental main contrib non-free

and then it's `apt-get install mercurial/experimental`.  (There's a
priority system such that, by default, that won't upgrade packages to
experimental unless specifically requested or needed as dependencies.)

--Jed
