Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Jed Davis
Steve Fink  writes:

> On 07/05/2016 01:33 AM, Julian Hector wrote:
>> If you encounter a crash that may be due to seccomp, please file a bug in
>> bugzilla and block Bug 1280415, we use it to track issues experienced on
>> nightly.
>
> What would such a crash look like? Do they boil down to some system
> call returning EPERM?

The relatively short version: it raises SIGSYS, and the signal handler
can take arbitrary actions (e.g., polyfilling open() to message an
out-of-process broker instead), but the default is currently to log a
message to stderr, invoke the crash reporter[*], and terminate the
process.

--Jed

[*] Also dumps the C stack directly if the crash reporter isn't
available, and the JS stack in either case; both of these are unsafe if
the syscall was in async signal context or had important locks held, but
you were crashing anyway.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Jed Davis
Benjamin Smedberg  writes:

> Assuming these crashes show up in crash-stats.mozilla.com, are there
> particular signatures, metadata, or other patterns that would let us say
> "this crash is caused by a sandbox failure"?

They should, and the expected distinguishing feature is a "Crash Reason"
of "SIGSYS".  I put a certain amount of work into getting the crash
reporting integration to work properly back in the B2G era, and on
desktop for media plugin processes.  (However, there aren't automated
tests to ensure it keeps working; "crashing the content process" isn't a
use case that the test framework docs were very helpful with.)

For additional filtering/faceting, the "crash address" is the syscall
number (but note that its meaning depends on architecture).

One more thing: it might be a good idea to reconsider the
crash-by-default policy for desktop content, at least for release
builds.  On B2G, the decision to do that was informed by the assumption
that builds would be comprehensively tested before being released to end
users, that the presence of crashes under test would block release, and
that this was all running in a relatively fixed environment.
Approximately none of that is true on desktop, especially the last item;
it seems to have worked out for media plugins, but they're much more
self-contained than content, and I don't know how widely used they are.
And there are other options for reporting diagnostic info which trade
off detail for hopefully not crashing.  It looks like there was never a
bug about this, which I guess means I should file one.

--Jed


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:

> On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>
> > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>
> 24s here. So faster link times and significantly faster clobber times. I'm
> sold!
>
> Any motherboard recommendations? If we want developers to use machines
> like this, maintaining a current config in ServiceNow would probably
> help.


Until the ServiceNow catalog is updated...

The Lenovo ThinkStation P710 is a good starting point (
http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
From the default config:

* Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
* Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
* Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
512 GB SSD as the primary HD. Throw in more drives if you need them.

Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
(plus/minus a few hundred depending on configuration specifics).

FWIW, I priced out similar specs for an HP Z640 and the markup on the CPUs
is absurd (costs >$2000 more when fully configured). Lenovo's
markup/pricing seems reasonable by comparison. Although I'm sure someone
somewhere will sell the same thing for cheaper.

If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
got the
http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
512 GB SSD, the price was very reasonable compared to similar
configurations at Dell, HP, others.

The just-released Broadwell-E processors with 6-10 cores are also nice
(i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
have no links to share. They should be <$2600 fully configured. That's a
good price point between the i7-6700K and a dual socket Xeon. Although if
you do lots of C++ compiling, you should get the dual socket Xeons (unless
you have access to more cores in an office or a remote machine).

If you buy a machine today, watch out for Windows 7. The free Windows 10
upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
out of the box. And, yes, you should use Windows 10 as your primary OS
because that's what our users mostly use. I run Hyper-V under Windows 10
and have at least 1 Linux VM running at all times. With 32 GB in the
system, there's plenty of RAM to go around and Linux performance under the
VM is excellent. It feels like I'm dual booting without the rebooting part.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Lawrence Mandel
On Tue, Jul 5, 2016 at 6:58 PM, Ralph Giles  wrote:

> On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>
> > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>
> 24s here. So faster link times and significantly faster clobber times. I'm
> sold!
>
> Any motherboard recommendations? If we want developers to use machines
> like this, maintaining a current config in ServiceNow would probably
> help.
>

Completely agree. You should not have to figure this out for yourself. We
should provide good recommendations in ServiceNow. I'm looking into
updating the ServiceNow listings with gps.

Lawrence


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Xidorn Quan
On Wed, Jul 6, 2016, at 05:12 AM, Gregory Szorc wrote:
> On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:
> 
> > I work remotely, normally from my laptop, and I have a single (fairly
> > slow) desktop usable as a compile server.
> 
> Gecko developers should have access to 8+ modern cores to compile Gecko.
> Full stop. The cores can be local (from a home office), on a machine in a
> data center you SSH or remote desktop into, or via a compiler farm (like
> IceCC running in an office).

I use my 4-core laptop for building as well... mainly because I found it
inconvenient to maintain a development environment on multiple machines.
I've almost stopped writing patches on my personal MBP because of that.

That said, if I can distribute the build to other machines, I'll happily
buy a new desktop machine and use it as a compiler farm to boost the
build.

- Xidorn


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Ralph Giles
On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:

> * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s

24s here. So faster link times and significantly faster clobber times. I'm sold!

Any motherboard recommendations? If we want developers to use machines
like this, maintaining a current config in ServiceNow would probably
help.

 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 2:37 PM, Ralph Giles  wrote:

> On Tue, Jul 5, 2016 at 12:12 PM, Gregory Szorc  wrote:
>
> >  I recommend 2x Xeon E5-2637v4 or E5-2643v4.
>
> For comparison's sake, what kind of routine and clobber build times do
> you see on a system like this? How much does the extra cache on Xeon
> help vs something like a 4 GHz i7?
>
> My desktop machine is five years old, but it's still faster than my
> MacBook Pro, so I've never bothered upgrading beyond newer SSDs. If
> there's a substantial improvement available in build times it would be
> easier to justify new hardware.
>
> A nop build on my desktop is 22s currently. Touching a cpp file (so
> re-linking xul) is 46s. A clobber build is something like 17 minutes.
>

Let's put it this way: I've built on AWS c4.8xlarge instances (Xeon E5-2666
v3 with 36 vCPUs) and achieved clobber build times comparable to the best
numbers the Toronto office has reported with icecc (between 3.5 and 4
minutes). That's 36 vCPUs at a 2.9 GHz base clock, turboing to 3.2 GHz
with all cores active or 3.5 GHz on a single core.

I don't have access to a 2xE5-2643v4 machine, but I do have access to a 2 x
E5-2637v4 with 32 GB RAM and an SSD running CentOS 7 (Clang 3.4.2 + gold
linker):

* clobber (minus configure): 368s (6:08)
* `mach build` (no-op): 24s
* `mach build binaries` (no-op): 3.4s
* `mach build binaries` (touch network/dns/DNS.cpp): 14.1s

I'm pretty sure the clobber time would be a little faster with a newer
Clang (also, GCC is generally faster than Clang).

That's 8 physical cores + hyperthreading (16 reported CPUs) @ 3.5 GHz. A 2 x
E5-2643v4 would be 12 physical cores @ 3.4 GHz. So 28 GHz vs 40.8 GHz. That
should at least translate to 90s clobber build time savings. So 4-4.5
minutes. Not too shabby. And I'm sure they make good space heaters too.

FWIW, my i7-6700K (4+4 cores @ 4.0 GHz) is currently taking ~840s (~14:00)
for clobber builds (with Ubuntu 16.04 and a different toolchain however).
Those extra cores (even at lower clock speeds) really do matter.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 2:27 PM, Chris Pearce  wrote:

> It would be cool if, once distributed compilation is reliable, if `./mach
> mercurial-setup` could 1. prompt you to enable using the local network's
> infrastructure for compilation, and 2. prompt you to enable sharing your
> CPUs with the local network for compilation.
>
>
We've already discussed this in build system meetings. There are a number
of optimizations around detection of your build environment that can be
made. Unfortunately I don't think we have any bugs on file yet.


> Distributing a Windows-friendly version inside the MozillaBuild package
> would be nice too.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 2:33 PM, Masatoshi Kimura 
wrote:

> Oh, my laptop has only 4 cores and I won't buy a machine or a compiler
> farm account only to develop Gecko because my machine works perfectly
> for all my other purposes.
>
> This is not the first time you blame my poor hardware. Mozilla (you are
> a Mozilla employee, aren't you?) does not want my contribution? Thank
> you very much!
>

My last comment was aimed mostly at Mozilla employees. We still support
building Firefox/Gecko on older machines. Of course, it takes longer unless
you have fast internet to access caches or a modern machine. That's the sad
reality of large software projects. Your contributions are welcome no
matter what machine you use. But having a faster machine should allow you
to contribute more/faster, which is why Mozilla (the company) wants its
employees to have fast machines.

FWIW, Mozilla has been known to send community contributors hardware so
they can have a better development experience. Send an email to
mh...@mozilla.com to inquire.


>
> On 2016/07/06 4:12, Gregory Szorc wrote:
> > On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:
> >
> >> I work remotely, normally from my laptop, and I have a single (fairly
> >> slow) desktop usable as a compile server.
> >>
> >
> > Gecko developers should have access to 8+ modern cores to compile Gecko.
> > Full stop. The cores can be local (from a home office), on a machine in a
> > data center you SSH or remote desktop into, or via a compiler farm (like
> > IceCC running in an office).
> >
> > If you work from home full time, you should probably have a modern and
> > beefy desktop at home. I recommend 2x Xeon E5-2637v4 or E5-2643v4. Go
> with
> > the E5 v4's, as the v3's are already obsolete. If you go with the higher
> > core count Xeons, watch out for clock speed: parts of the build like
> > linking libxul are still bound by the speed of a single core and the
> Xeons
> > with higher core counts tend to drop off in CPU frequency pretty fast.
> That
> > means slower libxul links and slower builds.
> >
> > Yes, dual socket Xeons will be expensive and more than you would pay for
> a
> > personal machine. But the cost is insignificant compared to your cost as
> an
> > employee paid to work on Gecko. So don't let the cost of something that
> > would allow you to do your job better discourage you from asking for
> > something! If you hit resistance buying a dual socket Xeon machine, ping
> > Lawrence Mandel, as he possesses jars of developer productivity
> lubrication
> > that have the magic power of unblocking purchase requests.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Ralph Giles
On Tue, Jul 5, 2016 at 12:12 PM, Gregory Szorc  wrote:

>  I recommend 2x Xeon E5-2637v4 or E5-2643v4.

For comparison's sake, what kind of routine and clobber build times do
you see on a system like this? How much does the extra cache on Xeon
help vs something like a 4 GHz i7?

My desktop machine is five years old, but it's still faster than my
MacBook Pro, so I've never bothered upgrading beyond newer SSDs. If
there's a substantial improvement available in build times it would be
easier to justify new hardware.

A nop build on my desktop is 22s currently. Touching a cpp file (so
re-linking xul) is 46s. A clobber build is something like 17 minutes.

 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Masatoshi Kimura
Oh, my laptop has only 4 cores and I won't buy a machine or a compiler
farm account only to develop Gecko because my machine works perfectly
for all my other purposes.

This is not the first time you blame my poor hardware. Mozilla (you are
a Mozilla employee, aren't you?) does not want my contribution? Thank
you very much!

On 2016/07/06 4:12, Gregory Szorc wrote:
> On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:
> 
>> I work remotely, normally from my laptop, and I have a single (fairly
>> slow) desktop usable as a compile server.
>>
> 
> Gecko developers should have access to 8+ modern cores to compile Gecko.
> Full stop. The cores can be local (from a home office), on a machine in a
> data center you SSH or remote desktop into, or via a compiler farm (like
> IceCC running in an office).
> 
> If you work from home full time, you should probably have a modern and
> beefy desktop at home. I recommend 2x Xeon E5-2637v4 or E5-2643v4. Go with
> the E5 v4's, as the v3's are already obsolete. If you go with the higher
> core count Xeons, watch out for clock speed: parts of the build like
> linking libxul are still bound by the speed of a single core and the Xeons
> with higher core counts tend to drop off in CPU frequency pretty fast. That
> means slower libxul links and slower builds.
> 
> Yes, dual socket Xeons will be expensive and more than you would pay for a
> personal machine. But the cost is insignificant compared to your cost as an
> employee paid to work on Gecko. So don't let the cost of something that
> would allow you to do your job better discourage you from asking for
> something! If you hit resistance buying a dual socket Xeon machine, ping
> Lawrence Mandel, as he possesses jars of developer productivity lubrication
> that have the magic power of unblocking purchase requests.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Chris Pearce
It would be cool if, once distributed compilation is reliable, if `./mach 
mercurial-setup` could 1. prompt you to enable using the local network's
infrastructure for compilation, and 2. prompt you to enable sharing your CPUs 
with the local network for compilation.

Distributing a Windows-friendly version inside the MozillaBuild package would 
be nice too.


Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Benjamin Smedberg
Assuming these crashes show up in crash-stats.mozilla.com, are there
particular signatures, metadata, or other patterns that would let us say
"this crash is caused by a sandbox failure"?

That seems like it would be fairly important, so that we can monitor this
in the field.

--BDS

On Tue, Jul 5, 2016 at 5:11 PM, Paul Theriault 
wrote:

>
> > On 6 Jul 2016, at 3:39 AM, Steve Fink  wrote:
> >
> > On 07/05/2016 01:33 AM, Julian Hector wrote:
> >> If you encounter a crash that may be due to seccomp, please file a bug
> in
> >> bugzilla and block Bug 1280415, we use it to track issues experienced on
> >> nightly.
> >
> > What would such a crash look like? Do they boil down to some system call
> returning EPERM?
> >
>
>
> FYI for others since the reply was off-list:
> ---
> It is a crash of the content process, and somewhere in the logs you should
> find an entry similar to this:
>
> "Sandbox: seccomp sandbox violation: pid 5154, syscall 355, args
> 2620711623 0 0 0 3012860244 3077481872.  Killing process."
>
> You could also check by setting: security.sandbox.content.level = 0 and
> see if the problem still exists.
> ---
> There is also a wiki page from the b2g era with more information:
> https://wiki.mozilla.org/Security/Sandbox/Seccomp#Use_in_Gecko
>
> We will update it to be Linux-specific instead of b2g-specific.
>
> Thanks,
> Paul
>
>


Re: Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Paul Theriault

> On 6 Jul 2016, at 3:39 AM, Steve Fink  wrote:
> 
> On 07/05/2016 01:33 AM, Julian Hector wrote:
>> If you encounter a crash that may be due to seccomp, please file a bug in
>> bugzilla and block Bug 1280415, we use it to track issues experienced on
>> nightly.
> 
> What would such a crash look like? Do they boil down to some system call 
> returning EPERM?
> 


FYI for others since the reply was off-list: 
---
It is a crash of the content process, and somewhere in the logs you should find 
an entry similar to this:

"Sandbox: seccomp sandbox violation: pid 5154, syscall 355, args 2620711623 0 0 
0 3012860244 3077481872.  Killing process."

You could also check by setting: security.sandbox.content.level = 0 and see if 
the problem still exists.
---
There is also a wiki page from the b2g era with more information:
https://wiki.mozilla.org/Security/Sandbox/Seccomp#Use_in_Gecko

We will update it to be Linux-specific instead of b2g-specific.

Thanks,
Paul




Intent to enable: touch-action (on Nightly)

2016-07-05 Thread Kartikaya Gupta
In the next few days, I intend to enable support for touch-action on
Nightly (#ifdef NIGHTLY_BUILD), for all platforms. The implementation
is behind the layout.css.touch_action.enabled property. touch-action
is spec'd as part of the Pointer Events spec [1] - the rest of the
spec is currently being implemented in Firefox, but is not yet ready
to be enabled. The touch-action CSS property is relatively independent
of the rest, and can be enabled separately. The patch to turn it on is
in bug 1029631, and the dependency tree has various implementation
bugs if you go deep enough.

Note that turning on touch-action will be detectable by web content on
all platforms (e.g. by checking for the existence of
getComputedStyle(document.body).touchAction). However, the
implementation only does something on platforms/configurations where
we have touch event support. Those platforms are: (1) Android, (2)
Linux with e10s and MOZ_USE_XINPUT2=1 in the environment, (3) Windows
Nightly with e10s enabled [2].

Other UAs have already mostly implemented and shipped touch-action
support [3]. I don't expect much in the way of interop or webcompat
issues with this change, but please file bugs if you notice something
that might be caused by this. Later this month, I will be meeting with
other browser vendors to hammer out any issues that come up, and I
expect to let this ride the trains in Fx 50 or (more likely) 51, along
with general touch event support on Windows.

Cheers,
kats

[1] 
https://www.w3.org/TR/pointerevents/#declaring-candidate-regions-for-default-touch-behaviors
[2] Since touch-enabled Windows devices trigger accessibility
activation, they run with e10s disabled by default, even on Nightly.
You have to set browser.tabs.remote.force-enable=true to actually get
e10s on these devices. The plan is to make e10s accessibility-friendly
in 51 or 52 at which point this caveat goes away.
[3] http://caniuse.com/#search=touch-action


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Steve Fink
I work remotely, normally from my laptop, and I have a single (fairly 
slow) desktop usable as a compile server. (Which I normally leave off, 
but when I'm doing a lot of compiling I'll turn it on. It's old and 
power-hungry.)


I used distcc for a long time, but more recently have switched to icecream.

With distcc, the time to build standalone > time to build on the laptop 
using distcc to use the compile server > time to build standalone 
locally on the compile server. (So if I wanted the fastest builds, I'd 
ditch the laptop and just do everything on the compile server.)


I haven't checked, but I would guess it's about the same story with icecc.

Both have given me numerous problems. distcc would fairly often get into 
a state where it would spend far more time sending and receiving data 
than it saved on compiling. I suspect it was some sort of 
bufferbloat-type problem. I poked at it a little, setting queue sizes 
and things, but never satisfactorily resolved it. I would just leave the 
graphical distcc monitor open, and notice when things started to go south.


With icecream, it's much more common to get complete failure -- every 
compile command starts returning weird icecc error messages, and the 
build slows way down because everything has to fail the icecc attempt 
before it falls back to building locally. I've tried digging into it on 
multiple occasions, to no avail; after some amount of restarting, it
magically resolves itself.


At least mostly -- I still get an occasional failure message here and 
there, but it retries the build locally so it doesn't mess anything up.


I've also attempted to use a machine in the MTV office as an additional 
lower priority compile server, with fairly disastrous results. This was 
with distcc and a much older version of the build system, but it ended 
up slowing down the build substantially.


I've long thought it would be nice to have some magical integration 
between some combination of a distributed compiler, mercurial, and 
ccache. You'd kick off a build, and it would predict object files that 
you'd be needing in the future and download them into your local cache. 
Then when the build got to that part, it would already have that build 
in its cache and use it. If the network transfer were too slow, the 
build would just see a cache miss and rebuild it instead. (The optional 
mercurial portion would be to accelerate knowing which files have and 
have not changed, without needing to checksum them.)


All of that is just for gaining some use of remote infrastructure over a 
high latency/low throughput network.


On a related note, I wonder how much of a gain it would be to compile to 
separate debug info files, and then transfer them using a binary diff (a 
la rsync against some older local version) and/or (crazytalk here) 
transfer them in a post-build step that you don't necessarily have to 
wait for before running the binary. Think of it as a remote symbol 
server, locally cached and eagerly populated but in the background.




Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 7:07 AM, Michael Layzell 
wrote:

> I'm certain it's possible to get a windows build working, the problem is
> that:
>
> a) We would need to modify the client to understand cl-style flags (I don't
> think it does right now)
> b) We would need to create the environment tarball
>

There is a script in-tree to create a self-contained archive containing
MSVC and the Windows SDK. Instructions at
https://gecko.readthedocs.io/en/latest/build/buildsystem/toolchains.html#windows.
You only need MozillaBuild and the resulting archive to build Firefox on a
fresh Windows install.


> c) We would need to make sure everything runs on windows
>
> None of those are insurmountable problems, but this has been a small side
> project which hasn't taken too much of our time. The work to get MSVC
> working is much more substantial than the work to get macOS and linux
> working.
>
> Getting it such that linux distributes to darwin machines, and darwin
> distributes to darwin machines is much easier. It wasn't done by us because
> distributing jobs to people's laptops seems kinda silly, especially because
> they may have a wifi connection, and as far as I know, basically every mac
> in this office is a macbook.
>
> The darwin machines simply need to add an `icecc` user, to run the build
> jobs in, and then darwin-compatible toolchains need to be distributed to
> all building machines.
>
> On Mon, Jul 4, 2016 at 7:26 PM, Xidorn Quan  wrote:
>
> > I hope it could support MSVC one day as well, and support distribute any
> > job to macOS machines as well.
> >
> > In my case, I use Windows as my main development environment, and I have
> > a personally powerful enough MacBook Pro. (Actually I additionally have
> > a retired MBP which should still work.) And if it is possible to
> > distribute Windows builds to Linux machines, I would probably consider
> > purchasing another machine for Linux.
> >
> > I would expect MSVC to be something not too hard to run with wine. When
> > I was in my university, I ran VC6 compiler on Linux to test my homework
> > without much effort. I guess the situation shouldn't be much worse with
> > VS2015. Creating the environment tarball may need some work, though.
> >
> > - Xidorn
> >
> > On Tue, Jul 5, 2016, at 07:36 AM, Benoit Girard wrote:
> > > In my case I'm noticing an improvement with my mac distributing jobs
> to a
> > > single Ubuntu machine but not compiling itself (Right now we don't
> > > support
> > > distributing mac jobs to other macs, primarily because we just want to
> > > maintain one homogeneous cluster).
> > >
> > > On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch
> > > 
> > > wrote:
> > >
> > > > On 04/07/2016 22:06, Benoit Girard wrote:
> > > >
> > > >> So to emphasize, if you compile a lot and only have one or two
> > machines
> > > >> on your 100mps or 1gbps LAN you'll still see big benefits.
> > > >>
> > > >
> > > > I don't understand how this benefits anyone with just one machine
> > (that's
> > > > compatible...) - there's no other machines to delegate compile tasks
> > to (or
> > > > to fetch prebuilt blobs from). Can you clarify? Do you just mean "one
> > extra
> > > > machine"? Am I misunderstanding how this works?
> > > >
> > > > ~ Gijs
> > > >
> > > >
> > > >
> > > >> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch <
> > gijskruitbo...@gmail.com
> > > >> >
> > > >> wrote:
> > > >>
> > > >> What about people not lucky enough to (regularly) work in an office,
> > > >>> including but not limited to our large number of volunteers? Do we
> > intend
> > > >>> to set up something public for people to use?
> > > >>>
> > > >>> ~ Gijs
> > > >>>
> > > >>>
> > > >>> On 04/07/2016 20:09, Michael Layzell wrote:
> > > >>>
> > > >>> If you saw the platform lightning talk by Jeff and Ehsan in London,
> > you
> > >  will know that in the Toronto office, we have set up a distributed
> > >  compiler
> > >  called `icecc`, which allows us to perform a clobber build of
> > >  mozilla-central in around 3:45. After some work, we have managed
> to
> > get
> > >  it
> > >  so that macOS computers can also dispatch cross-compiled jobs to
> the
> > >  network, have streamlined the macOS install process, and have
> > refined
> > >  the
> > >  documentation some more.
> > > 
> > >  If you are in the Toronto office, and running a macOS or Linux
> > machine,
> > >  getting started using icecream is as easy as following the
> > instructions
> > >  on
> > >  the wiki:
> > > 
> > > 
> > > 
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
> > > 
> > >  If you are in another office, then I suggest that your office
> > starts an
> > >  icecream cluster! Simply choose one linux desktop in the office,
> > run the
> > >  scheduler on it, and put its IP in the Wiki, then everyone can
> > connect
> > >  to

Re: The integration/autoland repo

2016-07-05 Thread Gregory Szorc
On Tue, Jun 28, 2016 at 11:44 AM, Gregory Szorc  wrote:

> As of a few minutes ago, when you land commits from MozReview they will be
> pushed to https://hg.mozilla.org/integration/autoland instead of
> https://hg.mozilla.org/integration/mozilla-inbound.
>
> For now, think of integration/autoland as just another mozilla-inbound or
> fx-team. In fact, the sheriffs will be treating the autoland repo just like
> they treat inbound and fx-team. But that will change in the weeks ahead.
>
> The integration/autoland repository is purposefully not being integrated
> into https://hg.mozilla.org/mozilla-unified and the Git mirrors. And it
> never will be. This is because integrating the autoland repo into a unified
> repo isn't compatible with things we'll be doing in the future.
>
> *Please do not pull from the autoland repo.* If you do, future changes
> that will introduce divergent commits/DAG heads will make your life painful.
>
> Since you aren't supposed to pull from the autoland repo, *Do not build
> the autoland repo or rebase your work on top of it.* As far as local
> development is concerned, pretend the autoland repo doesn't exist. The
> exception to this is if you absolutely need to pull a revision from it to
> test it explicitly.
>
> Also, the autoland repo is using a new permissions group that only 4
> people have write access to. So don't attempt to push to it because you
> won't be able to.
>
> Please follow bug 1266863 if you wish to track changes to the autoland
> repo and our future plans to get rid of inbound, fx-team, merge commits,
> and most backout commits in the final repo history (making mozilla-central
> history linear without most bad/backed out commits).
>

Since multiple people have been confused by this, I want to state
explicitly that the autoland repo does not require a Try push before
landing. The autoland repo is essentially working the same as the inbound
repo has been. There can still be bustage on the autoland repo. There can
still be backouts.

We're intentionally taking a "walk before you run" attitude towards the
autoland repo and trying not to make too many changes at once. Even with
our conservative approach, we still had a rocky first week (e.g. hook
issues allowed a push they shouldn't have, some sheriffs didn't have proper
permissions, and there were a surprising number of merge conflicts between
autoland and inbound). This week should be better. I apologize if this has
caused any trouble.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MXR permanently offline, please transition to DXR

2016-07-05 Thread Chris H-C
For now, can we get https://mxr.mozilla.org/ to point to something other
than the "Repairs in Progress" hardhat? A redirect to dxr would not be
amiss, methinks.

On Fri, Jul 1, 2016 at 7:50 AM, Panos Astithas  wrote:

> It seems like the awesomebar could at least help you by boosting the
> frecency weight of the new URL compared to the old one, so it can gradually
> (or even not so gradually) be replaced. We are going to fix this in bug
> 737836.
>
>
> Panos
>
> On Fri, Jul 1, 2016 at 3:58 AM, Justin Dolske  wrote:
>
> > This reminds me of a password manager bug we fixed 9 years ago (379997!),
> > where password manager would "helpfully" delete a saved HTTP login if it
> > got a 403 response upon using it. Unsurprisingly, this was a terrible
> > idea that caused your saved logins to disappear when a site was glitchy.
> >
> > Seems like it would be tough to find a solution to rewrite
> history/bookmarks
> > that works when servers are nice, but also ignores servers that are
> > naughty. Maybe this is addon-fodder for advanced users who want to
> > mass-edit bookmarks and history.
> >
> > Justin
> >
> > On Thu, Jun 30, 2016 at 4:55 PM, Robert O'Callahan  >
> > wrote:
> >
> > > In theory responses 301 and 308 mean "permanent redirect" so the
> browser
> > > could do that for those responses.
> > >
> > > In practice you'd need a lot of data to convince yourself that Web
> > > developers haven't screwed this up too badly. Maybe 308, being newer,
> is
> > > not compromised...
> > >
> > > Rob
> > > --
> > > lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe
> > uresyf
> > > toD
> > > selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
> > > rdsme,aoreseoouoto
> > > o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
> > > lurpr
> > > .a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt
> sstvr
> > > esn


Re: DXR problem?

2016-07-05 Thread Erik Rose
> tried dxr as replacement for lxr yesterday and today and it
> does not seem to work for me.
> Whatever I type into the searchbox the results is just an
> empty "This page was generated by DXR ."?

We are about to squash a JS bug that affects older FFs and Safari: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1283645. Perhaps that's affecting 
you?

> Displaying source like 
> https://dxr.mozilla.org/mozilla-central/source/mobile/android/modules/Prompt.jsm
> works but does not provide any xref links??

Hmm, though we don't underline the links (since that would underline 
practically the whole file), most of the symbols in that file can be clicked to 
find property definitions and references. More JS analysis is to come.

Cheers,
Erik


Guidance wanted: checking whether a channel (?) comes from a particular domain

2016-07-05 Thread Benjamin Smedberg
As part of plugin work, I'm implementing code in
nsDocument::StartDocumentLoad which is supposed to check whether this
document is being loaded from a list of domains or any subdomains. So e.g.
my list is:

["foo.com", "baz.com"] // expect 15-20 domains in this list, maybe more
later

And I want the following documents to match:

http://foo.com/...
https://foo.com/...
https://subd.foo.com
http://subd.baz.com

But http://www.bar.com would not match.

The existing domain and security checks in nsDocument::StartDocumentLoad
all operate on the nsIChannel, so I suppose that's the right starting point.

I couldn't find an existing API on nsContentUtils to do the check that I
care about. I'm sure that there is a way to do what I want using
nsIScriptSecurityManager, but I'm not sure whether that's the "right" thing
to do or whether this code already exists somewhere.

Reading the APIs, I imagine that I want to do something like this:

contentPrincipal = ssm.getChannelResultPrincipal(channel);
testPrincipal = ssm.createCodebasePrincipalFromOrigin(origin); // Is it ok
that this is scheme-less?
if (testPrincipal.subsumes(contentPrincipal)) -> FOUND A MATCH

Is this the right logic, or is there a simpler way to do this that doesn't
involve creating a bunch of principal objects on every document load? Is
running this logic on every document load a potential perf problem?

--BDS


Re: DXR problem?

2016-07-05 Thread Byron Jones

Richard Z wrote:

tried dxr as replacement for lxr yesterday and today and it
does not seem to work for me.
Whatever I type into the searchbox the results is just an
empty "This page was generated by DXR ."?

https://dxr.mozilla.org/mozilla-central/search?q=voice=true

Displaying source like 
https://dxr.mozilla.org/mozilla-central/source/mobile/android/modules/Prompt.jsm
works but does not provide any xref links??


this sounds like https://bugzilla.mozilla.org/show_bug.cgi?id=1283645

dxr uses "let" and "const", which aren't supported in all browsers, 
including older versions of firefox.





--
glob — engineering productivity — mozilla



Re: DXR problem?

2016-07-05 Thread Jeff Muizelaar
Is this what you're looking for?
https://dxr.mozilla.org/mozilla-central/search?q=voice

-Jeff

On Sun, Jul 3, 2016 at 5:52 AM, Richard Z  wrote:

> Hi,
>
> tried dxr as replacement for lxr yesterday and today and it
> does not seem to work for me.
> Whatever I type into the searchbox the results is just an
> empty "This page was generated by DXR ."?
>
> https://dxr.mozilla.org/mozilla-central/search?q=voice=true
>
> Displaying source like
> https://dxr.mozilla.org/mozilla-central/source/mobile/android/modules/Prompt.jsm
> works but does not provide any xref links??
>
> Richard
>
> --
> Name and OpenPGP keys available from pgp key servers
>


[Firefox Desktop] Issues found: June 27th to July 1st

2016-07-05 Thread Cornel Ionce

Hi everyone,


Here's the list of new issues found and filed by the Desktop Release QA 
Team last week, *June 27 - July 1* (week 26).


Additional details on the team's priorities last week, as well as the 
plans for the current week are available at:


   https://public.etherpad-mozilla.org/p/DesktopManualQAWeeklyStatus



*RELEASE CHANNEL*
none

*BETA CHANNEL*
ID: 1281778
Summary: [e10s] Mouse wheel scrolling disabled after using accessibility tools on Ubuntu
Product: Firefox
Component: General
Is a regression: YES
Assigned to: NOBODY

*AURORA CHANNEL*
none

*NIGHTLY CHANNEL*
none

*ESR CHANNEL*
none

Regards,
Cornel.


DXR problem?

2016-07-05 Thread Richard Z
Hi,

tried dxr as replacement for lxr yesterday and today and it
does not seem to work for me.
Whatever I type into the searchbox the results is just an
empty "This page was generated by DXR ."?

https://dxr.mozilla.org/mozilla-central/search?q=voice=true

Displaying source like 
https://dxr.mozilla.org/mozilla-central/source/mobile/android/modules/Prompt.jsm
works but does not provide any xref links??

Richard

-- 
Name and OpenPGP keys available from pgp key servers



Mozilla Sheriff Survey is still open for your input!

2016-07-05 Thread Carsten Book
Hi,

a little reminder that the survey is still online for another week, so this
is your chance to give us feedback and tell us your ideas! Every bit of
feedback helps! :)

Cheers,
- Tomcat

On Wed, Jun 8, 2016 at 12:54 PM, Carsten Book  wrote:

> Hi,
>
> When we moved to the "inbound" model of tree management, the Tree Sheriffs
> became a crucial part of our engineering infrastructure. The primary
> responsibility of the Sheriffs is and will always be to aid developers to
> easily, quickly, and seamlessly land their code in the proper location(s)
> and ensure that code does not break our automated tests.
>
> But of course there is always room for improvements and ideas how we can
> make things better. In order to get a picture from our Community (YOU!) how
> things went and how we can improve our day-to day-work we created a Survey!
> You can find the Survey here:
>
> http://bit.ly/1tgHtCR
>
> Thanks for taking part in this survey!
>
> Also you can find some of us also in London during the Mozilla All-hands
> if you want to talk to us directly!
>
> Cheers,
>
> - Tomcat
> on behalf of the Sheriffs Team
>


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Michael Layzell
I'm certain it's possible to get a Windows build working; the problems are
that:

a) We would need to modify the client to understand cl-style flags (I don't
think it does right now)
b) We would need to create the environment tarball
c) We would need to make sure everything runs on windows

None of those are insurmountable problems, but this has been a small side
project which hasn't taken too much of our time. The work to get MSVC
working is much more substantial than the work to get macOS and linux
working.

Getting it such that linux distributes to darwin machines, and darwin
distributes to darwin machines is much easier. It wasn't done by us because
distributing jobs to people's laptops seems kinda silly, especially because
they may have a wifi connection, and as far as I know, basically every mac
in this office is a macbook.

The darwin machines simply need to add an `icecc` user, to run the build
jobs in, and then darwin-compatible toolchains need to be distributed to
all building machines.

On Mon, Jul 4, 2016 at 7:26 PM, Xidorn Quan  wrote:

> I hope it could support MSVC one day as well, and support distribute any
> job to macOS machines as well.
>
> In my case, I use Windows as my main development environment, and I have
> a personally powerful enough MacBook Pro. (Actually I additionally have
> a retired MBP which should still work.) And if it is possible to
> distribute Windows builds to Linux machines, I would probably consider
> purchasing another machine for Linux.
>
> I would expect MSVC to be something not too hard to run with wine. When
> I was in my university, I ran VC6 compiler on Linux to test my homework
> without much effort. I guess the situation shouldn't be much worse with
> VS2015. Creating the environment tarball may need some work, though.
>
> - Xidorn
>
> On Tue, Jul 5, 2016, at 07:36 AM, Benoit Girard wrote:
> > In my case I'm noticing an improvement with my mac distributing jobs to a
> > single Ubuntu machine but not compiling itself (Right now we don't
> > support
> > distributing mac jobs to other macs, primarily because we just want to
> > maintain one homogeneous cluster).
> >
> > On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch
> > 
> > wrote:
> >
> > > On 04/07/2016 22:06, Benoit Girard wrote:
> > >
> > >> So to emphasize, if you compile a lot and only have one or two
> machines
> > >> on your 100mps or 1gbps LAN you'll still see big benefits.
> > >>
> > >
> > > I don't understand how this benefits anyone with just one machine
> (that's
> > > compatible...) - there's no other machines to delegate compile tasks
> to (or
> > > to fetch prebuilt blobs from). Can you clarify? Do you just mean "one
> extra
> > > machine"? Am I misunderstanding how this works?
> > >
> > > ~ Gijs
> > >
> > >
> > >
> > >> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch <
> gijskruitbo...@gmail.com
> > >> >
> > >> wrote:
> > >>
> > >> What about people not lucky enough to (regularly) work in an office,
> > >>> including but not limited to our large number of volunteers? Do we
> intend
> > >>> to set up something public for people to use?
> > >>>
> > >>> ~ Gijs
> > >>>
> > >>>
> > >>> On 04/07/2016 20:09, Michael Layzell wrote:
> > >>>
> > >>> If you saw the platform lightning talk by Jeff and Ehsan in London,
> you
> >  will know that in the Toronto office, we have set up a distributed
> >  compiler
> >  called `icecc`, which allows us to perform a clobber build of
> >  mozilla-central in around 3:45. After some work, we have managed to
> get
> >  it
> >  so that macOS computers can also dispatch cross-compiled jobs to the
> >  network, have streamlined the macOS install process, and have
> refined
> >  the
> >  documentation some more.
> > 
> >  If you are in the Toronto office, and running a macOS or Linux
> machine,
> >  getting started using icecream is as easy as following the
> instructions
> >  on
> >  the wiki:
> > 
> > 
> > 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
> > 
> >  If you are in another office, then I suggest that your office
> starts an
> >  icecream cluster! Simply choose one linux desktop in the office,
> run the
> >  scheduler on it, and put its IP in the Wiki, then everyone can
> connect
> >  to
> >  the network and get fast builds!
> > 
> >  If you have questions, myself, BenWa, and jeff are probably the
> ones to
> >  talk to.
> > 
> > 

Re: What should we do about legacy Mac font names?

2016-07-05 Thread Anne van Kesteren
On Tue, Jul 5, 2016 at 11:21 AM, Jonathan Kew  wrote:
> A third option is to just do it and see if anyone complains. I notice that
> 'font-family: "רעננה"' doesn't seem to work in either Chrome or Safari
> today, for instance, so perhaps it's safe to assume that usage on the web is
> in the minimal-to-nonexistent range.

If that is the case in general that actually means that continuing to
support these font names is more of a risk for us, since we're not the
dominant browser on Mac.


-- 
https://annevankesteren.nl/


Enabling seccomp-bpf for content process on nightly Linux desktop

2016-07-05 Thread Julian Hector
Hi everybody,

During the last couple of months, the sandboxing team worked on getting our
seccomp whitelist to a state that allowed us to enable seccomp on nightly
for Linux desktop users.

Our current sandboxing efforts can be tracked through the wiki at:
https://wiki.mozilla.org/Security/Sandbox
https://wiki.mozilla.org/Security/Sandbox/Milestones

Yesterday, the last bug blocking us from enabling it was resolved. I am
writing to this mailing list to let you know that we will enable seccomp
on nightly for Linux desktop today or tomorrow. (Bug 742434; the patches
are currently on inbound.)

We performed a lot of tests over the last couple of months to keep the
breakage to a minimum; however, we can't test all possible edge cases and
hope to find out more about possible breakage by enabling it on nightly.

It is important to keep in mind that the current sandbox is only a very
minor improvement: the whitelist still contains a lot of potentially
dangerous system calls (for example, sys_open). But before we start
tightening the whitelist, we first need to see whether it even works in
its current state without crashing Firefox.

If you encounter a crash that may be due to seccomp, please file a bug in
bugzilla and block Bug 1280415, we use it to track issues experienced on
nightly.

While we work on fixing the issue, it is also possible to disable seccomp
again by setting

security.sandbox.content.level = 0

in about:config. This way everything should be back to normal.

You can also join #boxing on IRC if you have any questions.

Thanks
Julian


What should we do about legacy Mac font names?

2016-07-05 Thread Henri Sivonen
Gecko has internal support for a number of legacy Mac encodings that
are not part of the Encoding Standard and that are *only* used for
decoding the names of legacy TrueType fonts.

As part of rewriting our Web-facing decoder/encoder infrastructure per
spec, some explicit action to deal with the font name code is needed
(drop support for non-Encoding Standard font names or support them in
a new non-Encoding Standard way).

Have I understood correctly that the TrueType font name is not exposed
by the Web Platform, is not used for identification by any Web Font
mechanism and is, therefore, only relevant to identifying system
fonts?

Telemetry shows that the usage of these encodings isn't the same
percentage across platforms, which suggests that fonts indeed aren't
coming from the Web but are coming from the system font directory.

Almost all OS X systems that run release-channel Firefox have a font
with a MacHebrew name installed. MacHebrew is non-zero but tiny on
Windows.

MacArabic, MacCE, MacGreek and MacGujarati are almost exclusively seen
on Windows! MacTurkish is more common on Windows than Mac.

MacRomanian and MacGurmukhi are non-zero but tiny.

MacIcelandic, MacDevanagari, MacFarsi and MacCroatian are at zero.

MacRoman and MacCyrillic are part of the Encoding Standard and will
continue to be supported in the form specified in the Encoding
Standard.

Legacy Mac encodings for CJK weren't Mac-specific and will continue to
be supported in the form specified in the Encoding Standard.

We don't support the rest of the legacy Mac encodings.

Since TrueType fonts can have multiple name records, it's possible
that the fonts that have legacy Mac name records also have Unicode or
Windows name records, in which case us dropping support for
(non-Roman, non-Cyrillic, non-CJK) legacy Mac font names wouldn't
break stuff.

How do we find out if we can remove support for (non-Roman,
non-Cyrillic, non-CJK) legacy Mac font names?

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/