Re: Using MozPhab without Arcanist

2019-10-24 Thread Axel Hecht

On 24.10.19 at 12:13, Henrik Skupin wrote:

glob wrote on 23.10.19 17:56:


It's available now - make sure you're running the latest version by
running `moz-phab self-update`.


That's what I did yesterday, but it looks like the self-update didn't
actually update me to the latest MozPhab-0.1.55. I will check again soon
with that version. Thanks!

Henrik



You need to run self-update twice to move over to the pip version.

Also, make sure not to run it inside a virtualenv like I did; otherwise 
you end up uninstalling it and reinstalling from scratch ;-)
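
For example, a minimal sequence (hedged; `deactivate` only exists inside 
an active virtualenv):

    # leave any active virtualenv first, so moz-phab updates the global
    # install instead of the virtualenv's copy
    deactivate
    # the first run moves you over to the pip-based package, the second
    # updates that package to the latest release
    moz-phab self-update
    moz-phab self-update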


Axel


mach lint -l l10n coming your way

2019-05-02 Thread Axel Hecht

Hi,

I just landed a new linter into mach, and thus into treeherder and 
phabricator.


It's called `l10n`, and `l1nt` on treeherder. It checks for common 
errors in localizable files, like duplicate strings and parsing errors, 
but also runs some more detailed checks.


On phabricator, it also reports warnings, which might indicate missed 
string ID changes.


So if a patch doesn't show the strings you're hoping for, a good first 
step is to run `./mach lint -w` now.
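
For reference, the two invocations mentioned in this post (flag 
spellings as used above; check `./mach lint --help` if they differ in 
your tree):

    # run just the l10n linter
    ./mach lint -l l10n
    # include warnings, like possibly missed string ID changes
    ./mach lint -w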


Together with l10n jobs on try and pseudo localization [1], this should 
make your life easier.


On string changes: the rules remain the same. If you change the 
semantics of a string, you need a new ID. If you just fix a typo or 
grammar, please don't create a new ID. For anything in between, feel 
free to ask; often the best choice depends on what's actually in the 
localizations.


Technical detail: The test works by maintaining a local clone of 
https://hg.mozilla.org/l10n/gecko-strings/ in your ~/.mozbuild, updating 
it once it's more than two days old. So if you see something cloning 
while you run mach lint, that's intended. This comes with the benefit 
that we only check against strings we actually expose to localizers. It 
also checks for string IDs we might not use on central, but on beta or 
release, which also makes this test the right thing to run for landings 
on beta or release branches.
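
If you ever want to refresh that comparison clone by hand, something 
like this should work (the directory name under ~/.mozbuild is an 
assumption):

    cd ~/.mozbuild/gecko-strings
    hg pull -u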


Thanks to Andrew Halberstadt for bearing with me through all the 
refactors while he was trying to help me land this.


There are a few next steps I already know about: For one, run this more 
widely across mozilla projects, for example on our newer Android apps 
[2]. I also have a local branch for ' vs ’, but it's still in its infancy.


If you find issues and/or want enhancements, please file a bug in 
linting and CC me.


Axel

[1] 
https://firefox-source-docs.mozilla.org/intl/l10n/l10n/fluent_tutorial.html#pseudolocalization

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1548500


Re: Patching FF 52.9esr with v60+ Security updates possible?

2019-04-13 Thread Axel Hecht

Disclaimer, I'm not a security expert, but a couple of thoughts:

We have rewritten 52.x code in Rust, and we have removed features. If 
there are security vulnerabilities in the 52.x versions of that code, 
nobody is going to tell Mozilla. In that sense, it's unlikely that 
Mozilla will ever have the list of known vulnerabilities in your patched 
52.x code base.


Some security fixes we made might have been implemented by rewriting 
components in Rust. Assuming that Rust and the modern compiler toolchain 
it requires are part of your problem, you won't be able to port those 
fixes.


My personal take is that you may be able to apply a lot of the patches 
that have CVEs, but that's likely not going to get you a code base that 
is similarly secure to the one we're working on.


Axel

On 13.04.19 at 00:43, Charles Robertson wrote:

Hi,

I know this sounds like a strange question. However, we have a very large 
customer who is using our old OS, for which the last successful build of 
Firefox ESR was 52.9. Because of the massive updates in FF 60 we have been 
unable to get FF 60+ to build on that old OS. This customer has demanded we 
provide an updated Firefox for this old OS, so I am asking if it would be 
possible to patch FF 52.9esr with the security updates since 60 was released?

Thanks,
Cheers
   Charles Robertson
   Firefox Maintainer - SUSE






Re: Improving our usage of Bugzilla

2019-04-03 Thread Axel Hecht

On 02.04.19 at 19:24, Sylvestre Ledru wrote:

Because I had a few discussions about task vs enhancement, a good way to
tell the difference between the two use cases is: if I ever need help with
this bug, should it come from someone in Product or an EPM?


Can I ask for clarification? Is it

task - EPM help
enhancement - Product help

or the other way around? You listed them the other way around, and 
that's not how I'd line them up.


Axel


Re: PSA: Min clang / libclang requirement was updated not long ago...

2019-02-27 Thread Axel Hecht

On 27.02.19 at 15:28, Nathan Froyd wrote:

On Wed, Feb 27, 2019 at 9:05 AM Axel Hecht  wrote:


On 27.02.19 at 14:39, Nathan Froyd wrote:

On Wed, Feb 27, 2019 at 6:22 AM Kartikaya Gupta  wrote:

On Wed, Feb 27, 2019 at 3:40 AM Axel Hecht  wrote:


Can we please not force bootstrap?


+1. In general bootstrap isn't "rock solid" enough to force people
into running it.


If people have problems with bootstrap (it doesn't do enough, it
assumes too much about your system, etc. etc.), please file bugs on
what's wrong.  We need to start depending more on bootstrap for
everything, to the point of "you can't depend on X unless it gets
installed via bootstrap", and we can't get to that world if we don't
know what rough edges people find in bootstrap.


Do you have a suggestion on how to do that in practice? Rolling back
from a broken development environment is easily a couple of hours of
work, in the case of homebrew breaking all my virtualenvs, for example.


It's not clear to me what bootstrap does that breaks things.  Do you
want the ability to skip installing everything via homebrew?


The list of things that I consider broken in homebrew is dynamic, and I 
guess not worth enumerating.


I have installed hg outside of homebrew, and any virtualenv using a 
homebrew Python can break at any invocation of homebrew, and with it the 
Firefox build. Any invocation, because homebrew recently decided not 
just to auto-update, but also to auto-cleanup.


See

 ls -al obj-firefox-repack/_virtualenvs/init/
total 16
drwxr-xr-x   8 axelhecht  wheel  256 Feb 21 11:58 .
drwxr-xr-x   3 axelhecht  wheel   96 Feb 21 11:58 ..
lrwxr-xr-x   1 axelhecht  wheel   83 Feb 21 11:58 .Python -> 
/usr/local/Cellar/python@2/2.7.15_3/Frameworks/Python.framework/Versions/2.7/Python

drwxr-xr-x  21 axelhecht  wheel  672 Feb 21 11:58 bin
drwxr-xr-x   3 axelhecht  wheel   96 Feb 21 11:58 include
drwxr-xr-x   3 axelhecht  wheel   96 Feb 21 11:58 lib
-rw-r--r--   1 axelhecht  wheel   61 Feb 21 11:58 pip-selfcheck.json
-rw-r--r--   1 axelhecht  wheel   15 Feb 21 11:58 python_exe.txt

and that symlink disappears for each upgrade of python.
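
A hedged way to spot that breakage, using the layout from the listing 
above (the obj dir name is from my setup):

    # list virtualenv Python symlinks that no longer resolve
    find obj-firefox-repack/_virtualenvs -name .Python \
      ! -exec test -e {} \; -print
    # the blunt fix: drop the virtualenvs and let mach recreate them
    rm -rf obj-firefox-repack/_virtualenvs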

And the way that bootstrap interacts with homebrew doesn't allow 
pinning revisions, or at least it broke at some point.


To reiterate, I don't intend to run bootstrap to figure out what damage 
it does. It might cost me a day, and fill me up with anger that's not 
good for anybody.


Axel


Re: PSA: Min clang / libclang requirement was updated not long ago...

2019-02-27 Thread Axel Hecht

On 27.02.19 at 14:39, Nathan Froyd wrote:

On Wed, Feb 27, 2019 at 6:22 AM Kartikaya Gupta  wrote:

On Wed, Feb 27, 2019 at 3:40 AM Axel Hecht  wrote:


Can we please not force bootstrap?


+1. In general bootstrap isn't "rock solid" enough to force people
into running it.


If people have problems with bootstrap (it doesn't do enough, it
assumes too much about your system, etc. etc.), please file bugs on
what's wrong.  We need to start depending more on bootstrap for
everything, to the point of "you can't depend on X unless it gets
installed via bootstrap", and we can't get to that world if we don't
know what rough edges people find in bootstrap.


Do you have a suggestion on how to do that in practice? Rolling back 
from a broken development environment is easily a couple of hours of 
work, in the case of homebrew breaking all my virtualenvs, for example.


Axel


Thanks,
-Nathan





Re: PSA: Min clang / libclang requirement was updated not long ago...

2019-02-27 Thread Axel Hecht

Can we please not force bootstrap?

Background: As a python-heavy engineer, my relationship with homebrew is 
pretty broken these days. There's stuff that's essential to my 
day-to-day business that I had to take out of homebrew, including having 
my own compiles of python. I also have to have my own install of hg, for 
the sake of extensions I use.


./mach bootstrap interferes with those decisions, so I can't run it anymore.

Axel

On 26.02.19 at 19:17, Chris Peterson wrote:
Seems like mach bootstrap should have a clobber flag so anyone updating 
build tool dependencies can force people to re-run mach bootstrap (and 
save many people frustration).



On 2/26/2019 10:00 AM, David Major wrote:

Does configure warn about this?

The link between this error and needing to bootstrap is not super
clear (and a surprising number of people don't read dev-platform) so
I'm not looking forward to answering the same question in #build for
the rest of the week. :)

On Tue, Feb 26, 2019 at 12:23 PM Emilio Cobos Álvarez 
 wrote:


... so if you don't use the mozilla-provided libclang (or are using a
very old one), and you see an error like:


error[E0277]: the trait bound
`values::generics::rect::Rect<…, f32>>: std::convert::From<…>` …



Please re-run mach bootstrap, or update your libclang.

Thanks!

  -- Emilio






Re: New and improved "about:config" for Firefox Desktop

2019-01-25 Thread Axel Hecht

Is there a tracking bug for follow-ups?

I'd have a few: adding a pref without searching (*), showing the add UI 
for long searches, filtering/ordering by last modified, searching in 
values, and not being able to abort an edit.


(*) I just realized that I didn't understand how "add" works. Maybe the 
bug is to make that discoverable?


Axel

On 24.01.19 at 20:31, Paolo Amadini wrote:

Last year a group of students, Luke, Matthias, and Vincent, designed and
implemented a new version of "about:config" in order to improve the
ergonomics and align the look and feel with other in-content Firefox
pages. They did an amazing job, working with design input from Amy Lee
and with myself doing code reviews.

I'm happy to announce that this work will be available to everyone in
Firefox 67, and can be already used in Nightly at this URL:

     chrome://browser/content/aboutconfig/aboutconfig.html

Most improvements are the natural result of using HTML instead of XUL:

  * There are visible buttons for editing preferences
  * String values are displayed in full as multiline text
  * Find in page works for both names and values
  * Triple click selects one preference name or value quickly
  * Text selection works on multiple preferences
  * The context menu is the same as regular web pages
     - Copy to the clipboard
     - Open selected link
     - Search with your preferred engine
  * Search results don't include spurious value matches anymore
  * Closing and reopening the browser while the tab is pinned
  preserves the search term

We've not just converted the old page, we've designed something new
based on actual use cases, telemetry data, and opportunity cost. We
preferred natural HTML page interactions, for example a double click now
selects text instead of toggling the value. The way the page is explored
with screen readers has also changed, and we've ensured that the new way
is still clear and easy to use.

We're still keeping the old "about:config" around at the following URL
for a while, to mitigate risk related to unforeseen circumstances:

     chrome://global/content/config.xul

Thunderbird will not be affected by this change initially, but at some
point we'll remove the old code from mozilla-central since Thunderbird
will be the only remaining user.


*Performance*

This page can be slower than the old one in some cases. On slower
machines the page may take a moment to display all preferences, if you
so choose. We worked around this by waiting for the first input before
displaying results, as 93% of "about:config" page shows include a search
anyway. Navigation, scrolling, and find in page are then fast.

We've used performance profiling to optimize the page and avoid the
slowest layout modes, but we've not compromised on using the latest
best practices for Firefox Desktop like Fluent localization, which are
anyways in the process of being optimized on their own.

We've explicitly chosen to avoid virtualizing the list, that is only
rendering visible DOM nodes, because this would add complexity that is
not needed for an internal page. It would also nullify most of the
advantages in accessibility and usability that we gained at a low cost
just because we're using a simple HTML table. Effort would be better
spent on optimizing the web for the layout of tables of about 3,000
rows, which would benefit every web site instead of Firefox only.


*Tutorials and screenshots on the web*

While with some features there is a concern that a change would make it
more difficult for users to follow instructions found in older tutorials
on the web, this is much less of a concern in this case, given that the
page caters to experienced users and the changes affect presentation
rather than actual functionality.

In fact, existing information on the web can more easily become obsolete
because the preferences go away or change their meaning, rather than
because of a change in how the values can be changed.


*Features that have not been rewritten*

If the new page is missing a feature that the old one used to have,
there is probably a good reason. Luke added telemetry probes to the
current "about:config" so we know how people use it. It's basically just
one mode of operation across all channels: search, then maybe edit or
add a preference.

There are more details in the history section below, but this is to say
that it is unlikely that we would accept a patch to add back a certain
feature just because it used to be present before. All patches would
have to be motivated by an actual need and include exhaustive
regression tests.

That said, we have ideas for supporting new use cases for browser
developers, like pinning a list of favorites or just showing recently
modified preferences first, but we don't plan to start working on them
before the current version reaches Release.


*More details on history, motivation, and process*

If you're reading this you probably already have a good idea of what
we're talking about, but it's worth stating how we thought about the

Re: C++ standards proposal for a embedding library

2018-07-18 Thread Axel Hecht

CCing snorp.

I guess it's interesting to see how the geckoview API differs from the 
webview API, which of those differences are related to the goal of that 
C++ API, and which are more browser-focused.


And whether the C++ API should also be browser-focused, in the end.

I'm not making any statement on the interesting question of such a 
stdlib thing, and how that impacts choice and innovation in the web 
space, which is probably the more important question for mozilla.


Axel

On 18.07.18 at 21:55, Botond Ballo wrote:

On Wed, Jul 18, 2018 at 3:32 PM, Jeff Gilbert  wrote:

It feels like the committee is burnt out on trying to solve the
general library problem, but contemplating something massively complex
like this instead doesn't follow, and is an answer to the wrong
question.

Make it easier to integrate libraries and we wouldn't see kludge
proposals like this.


Could you elaborate on the "complexity" and "kludge" aspects?

One of the main complaints about the 2D graphics proposal was that it
was trying to create a new spec in a space where there are existing
mature specs, and that the committee as a group doesn't necessarily
have the depth of domain expertise in graphics necessary to specify a
library like this. This web_view proposal attempts to address that
concern by leveraging existing graphics and other specs from web
standards. So, in a sense, the committee is trying to avoid dealing
with complexity / reuse the work that others have done to tackle the
complexity inherent in the problem space.

If you're referring to the embedding mechanism / API itself being
complex, it would be useful to elaborate on why. The API surface in
the proposed library seems to be quite small.

It's also worth noting that there is prior art in this space in the
form of e.g. the QtWebView and wxWebView APIs, which I believe are
fairly popular in cross-platform C++ applications, suggesting a demand
for this sort of library.

Note that I'm not necessarily advocating for this proposal; I'm just
trying to understand the concerns / feedback better so that I can
communicate them to the proposal authors effectively. If you would
prefer to communicate the concerns to the authors directly, please
feel free to do so.

Thanks,
Botond





Re: Intent To Require Manifests For Vendored Code In mozilla-central

2018-04-10 Thread Axel Hecht

A couple of comments:

One thing I'm missing is the ability to do mono-repo imports. Say we 
want to vendor in 
https://github.com/projectfluent/fluent.js/tree/master/fluent-gecko.


For js libraries, we might also want to pay attention to .npmignore 
(others already mentioned hg, so also .hgignore).


There's no spec for what happens with patches that fail to apply, or 
with failed run_after scripts.


Do we intend to do something if the LICENSE changes? Also, what are we 
supposed to do if the vendored code doesn't have a LICENSE file?


Axel

On 10.04.18 at 06:25, glob wrote:
mozilla-central contains code vendored from external sources. Currently 
there is no standard way to document and update this code. In order to 
facilitate automation around auditing, vendoring, and linting we intend 
to require all vendored code to be annotated with an in-tree YAML file, 
and for the vendoring process to be standardised and automated.



The plan is to create a YAML file for each library containing metadata 
such as the homepage url, vendored version, bugzilla component, etc. See 
https://goo.gl/QZyz4x for the full specification.



We will work with teams to add moz.yaml files where required, as well as 
adding the capability for push-button vendoring of new revisions.



Please address comments to the dev-platform list.





Re: FYI: Short Nightly Shield Study involving DNS over HTTPs (DoH)

2018-03-21 Thread Axel Hecht

I have a couple of further questions:

One is about the legal impact on users. DNS mangling is part of law 
enforcement strategies in many parts of the world (incl. Germany). We 
should restrict this experiment to regions where Mozilla knows that 
there's no legal trouble in using DoH and cloudflare. Circumventing law 
enforcement can get pretty hairy in some regions, I suspect.


The other is a request for a bit more detail on the scope of Mozilla's 
agreement with cloudflare beyond the experiment. Does our agreement 
extend to people not using Firefox? What happens to folks that in some 
weird way are stuck with the experiment's DoH setup once the experiment 
ends? It'd be a great pitch if the agreement was that cloudflare offers 
this service with said terms; if they stopped liking the terms, they'd 
have to shut the service down.


These questions are really only about the scope, not so much about if we 
should do it.


Axel

On 19.03.18 at 18:08, Selena Deckelmann wrote:

Hi!

Thanks for all the thoughtful comments about this experiment. The intent of
this work is to provide users privacy-respecting DNS. Status quo for DNS
does not offer many users reasonable, informed choice about log retention,
and doesn't offer encrypted DNS. Users also cannot be reasonably expected
to negotiate on their own with their ISPs/VPN providers for things like
24-hour retention for logs that can be used to create profiles. Today's
default environment (speaking technically wrt lack of encryption and log
storage, and also in terms of the regulatory environment in the US) allows
*all* of this data to be collected indefinitely and sold to third parties.

There's a lot of thinking that went into the agreement we have with
Cloudflare to enable this experiment in a way that respects user privacy.
We also want to explain the impact we think this kind work will have on the
privacy of the Internet. I'd like the team to share this in a blog post
about the experiment, and so have started work with them on it. More on
this shortly!

-selena



On Mon, Mar 19, 2018 at 8:16 AM Daniel Stenberg 
wrote:


On Mon, 19 Mar 2018, Martin Thomson wrote:


I don't know if it is possible to know if you have a manually-configured
DNS server, but disabling this experiment there if we can determine that
would be good - that might not be something to worry about with Nightly,
but it seems like it might be needed for this to hit the trains.

How do we otherwise determine that a DNS server is not safe to replace?
Split horizon DNS is going to cause unexpected failures when users -
particularly enterprise users - try to reach names that aren't public.
That's not just an enterprise thing; this will break my home router in
some ways as well, though I'm actually totally OK with that in this case :)


I don't think it is possible - with any particularly high degree of
certainty
- to know if a DNS server has been manually configured (or even if the term
itself is easy to define). The system APIs for name lookups typically don't
even expose which DNS server they use, they just resolve host names to
addresses for us.

For TRR, we've instead focused pretty hard on providing a
"retry-algorithm" so
that Firefox can (if asked), retry a failed name resolve or TCP connect
without TRR and then "blacklist" that host for further TRR use for a period
into the future.

For hosts that are TRR-blacklisted this way, we also check the next-level
domain of it in the background to see if we should also blacklist the whole
domain from TRR use. Ie if "www.example.com" fails with TRR, it gets
blacklisted, retried with the native resolver and "example.com" is tested
to
see if the entire domain should be blacklisted.

--

   / daniel.haxx.se





Re: Password autofilling

2018-01-02 Thread Axel Hecht

On 02.01.18 at 17:22, Gijs Kruitbosch wrote:

On 01/01/2018 20:08, Jonathan Kingston wrote:

We have the ability to turn off the whole login manager within Firefox
preferences: "Remember logins and passwords for web sites" but no way to
prevent autofill.


There's an about:config pref, as [1] points out, which does this.

I wonder if there's a way to require user interaction only when pages 
contain non-same-origin scripts. Then again, it's not clear that that'd 
be "worth it", in the sense that that would actually significantly 
reduce the number of pages where user interaction would be required, nor 
that it wouldn't make the browser's behaviour less understandable to end 
users (as we would sometimes autofill without interaction, and sometimes 
wouldn't).


In other form code we also care about whether form fields are focusable 
(ie visible, editable etc.), which is something we could also 
potentially use to mitigate these attacks, though it could probably be 
bypassed by having a visible element that is positioned "offscreen" in 
an overflow:hidden container, or something of that sort.


~ Gijs


Or could we start blocking tracking providers with this practice in general?

As much as this sounds like an arms race, these providers are only 
valuable if they're on a lot of sites, so this might actually be a 
winnable arms race.


Axel


Re: INTENT TO DEPRECATE (taskcluster l10n routing)

2017-12-04 Thread Axel Hecht

On 04.12.17 at 05:42, Jet Villegas wrote:


On Sun, Dec 3, 2017 at 05:15, Axel Hecht <l...@mozilla.com> wrote:


On 01.12.17 at 16:45, Justin Wood wrote:
> Hey Everyone,
>
> tl;dr if you don't download nightly l10n repacks via taskcluster
index
> routes, this does not affect you.
>
> Up until recently you could only find nightly l10n repacks with the
> following routes:
>
> *
>

.gecko.v2.{project}.revision.{head_rev}.{build_product}-l10n.{build_name}-{build_type}.{locale}
> *
>

.gecko.v2.{project}.pushdate.{year}.{month}.{day}.{pushdate}.{build_product}-l10n.{build_name}-{build_type}.{locale}
> *
>

{index}.gecko.v2.{project}.latest.{build_product}-l10n.{build_name}-{build_type}.{locale}
>
> Recently I have updated the routing to match that of regular
Nightlies,
> specifically one such route is:
>
>

gecko.v2.mozilla-central.nightly.revision.a21f4e2ce5186e2dc9ee411b07e9348866b4ef30.firefox-l10n.linux64-opt

That's followed by locale code, right? I found


gecko.v2.mozilla-central.nightly.revision.de1f7a92e8726bdd365d4bbc5e65eaa369fbc20a.firefox-l10n.macosx64-opt.de


> This deprecation is in preparation of actually building l10n
repacks on
> (nearly) every code checkin, rather than just on nightlies.

Does that mean that you're deprecating all but that route, or are there
more?

> Let me know if there are any questions or concerns.

No concerns, just curiosity. We're not running any tests on localized
builds at this point, right?


I hope we can change that (testing on localized builds) with this 
proposed change. We’ve gotten reports that localized builds (and 
related usage; e.g., input method editors) cause A11y API activation, 
which triggers other bugs for us.


My gut reaction is "that shouldn't happen", though, well, no idea what 
IMEs do. Do we have bugs tracking these? I'd love to be CCed on those.


As for running tests, we have 100 localizations and 5 platforms, so 
we'll need to be pretty conservative about which tests we run so that we 
don't blow up our budget by a factor of 500. Also, many of our tests 
actually hard-code en-US artifacts, like "ensure that the button on this 
dialog says 'Save'", and thus will fail when run on localized builds. I 
don't have a list, though.


Axel



Re: INTENT TO DEPRECATE (taskcluster l10n routing)

2017-12-03 Thread Axel Hecht

On 01.12.17 at 16:45, Justin Wood wrote:

Hey Everyone,

tl;dr if you don't download nightly l10n repacks via taskcluster index
routes, this does not affect you.

Up until recently you could only find nightly l10n repacks with the
following routes:

*
.gecko.v2.{project}.revision.{head_rev}.{build_product}-l10n.{build_name}-{build_type}.{locale}
*
.gecko.v2.{project}.pushdate.{year}.{month}.{day}.{pushdate}.{build_product}-l10n.{build_name}-{build_type}.{locale}
*
{index}.gecko.v2.{project}.latest.{build_product}-l10n.{build_name}-{build_type}.{locale}

Recently I have updated the routing to match that of regular Nightlies,
specifically one such route is:

gecko.v2.mozilla-central.nightly.revision.a21f4e2ce5186e2dc9ee411b07e9348866b4ef30.firefox-l10n.linux64-opt


That's followed by locale code, right? I found

gecko.v2.mozilla-central.nightly.revision.de1f7a92e8726bdd365d4bbc5e65eaa369fbc20a.firefox-l10n.macosx64-opt.de
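
As an example, such a route can be resolved to its task through the 
index API (host and API shape as of late 2017; hedged):

    # look up the task indexed under the route above
    curl -sL "https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.nightly.revision.de1f7a92e8726bdd365d4bbc5e65eaa369fbc20a.firefox-l10n.macosx64-opt.de"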


This deprecation is in preparation of actually building l10n repacks on
(nearly) every code checkin, rather than just on nightlies.


Does that mean that you're deprecating all but that route, or are there 
more?



Let me know if there are any questions or concerns.


No concerns, just curiosity. We're not running any tests on localized 
builds at this point, right?


Axel


Re: Proposal to remove some preferences override support

2017-11-02 Thread Axel Hecht
Looping in mkaply explicitly, in case this has an impact on 
organizational deployments.


Axel

On 02.11.17 at 00:41, Nicholas Nethercote wrote:

Greetings,

In https://bugzilla.mozilla.org/show_bug.cgi?id=1413413 I am planning to
remove a couple of things relating to preferences.

1) Remove the defaults/preferences directory support for extensions. This
is a feature that was used by legacy extensions but is not used by
WebExtensions.

2) Remove the "preferences" override directory in the user profile.
This removes
support for profile preferences override files other than user.js.

The bug has a patch with r+. The specific things it removes include:
- The "load-extension-default" notification.
- The NS_EXT_PREFS_DEFAULTS_DIR_LIST/"ExtPrefDL" directory list, including
the entry from the toolkit directory service.

Does anybody foresee any problems with this change?

Thanks.

Nick





Re: Changes to tab min-width

2017-10-04 Thread Axel Hecht

On 04.10.17 at 18:43, Jeff Griffiths wrote:

Om my system ( retina macbook pro ) 70 is starting to look like a better
compromise for tab readability.

How I have been testing this:

- change the value to a specific number, say 70
- open enough tabs so that overflow triggers, then close two tabs, then
open a tab ( we retain overflow until 2 tabs have been closed! )
- count the number of tabs opened
- open chrome and open that number of tabs
- compare the utility of each browser


I tested 70 and 75 (which Aaron suggested), and so far 75 is OK, while 
70 crosses the border into my tab claustrophobia.


In particular on 50, I had trouble finding the right hit targets to 
select tabs or close them. And 70 still feels close to that, while 75 
for me personally doesn't.


I'll run with 75 for a couple more days.

And yes, the profiles I'm trying this with are mostly tabs on similar 
sites, so the favicons don't provide any practical value.


Axel



Jeff

On Wed, Oct 4, 2017 at 9:37 AM, Marco Bonardo  wrote:


On Tue, Oct 3, 2017 at 10:36 PM, Jeff Griffiths 
wrote:

1. do you prefer the existing behaviour or the new behaviour?
2. if you prefer a value for this pref different than 50 or 100, what
is it? Why?


I prefer being able to see a minimum part of the title, because I very
often have multiple tabs open on the same page (many bugzilla, many
searchfox, many crash-stats) and now I cannot distinguish them at all.
But at the same time, I never much liked the scrolling behavior, to the
point that when my tabs start scrolling, I begin a cleaning task to
close some of them.
Looks like I'm unhappy in both cases, sorry. If I really had to pick,
I'd probably want to see the first 10 chars of the title.





Changes to how we localize Firefox - one repository to rule a hundred repositories

2017-09-07 Thread Axel Hecht

Hi,

tl;dr: We'll be using a single French, German, Slovenian localization 
across all of mozilla-central, -beta, -release, -esr-*, starting with 
57. This change will ride the trains.


We call it "cross channel localization", or x-channel in short.

How does that work?

We're creating an artificial repository containing the strings from 
mozilla-central and comm-central. This will expand to contain also the 
strings on -beta, -release, etc, as the change rides the trains. This 
will be one repository, with a single head. See 
https://hg.mozilla.org/users/axel_mozilla.com/en-US for a current draft.


We'll use our existing workflows to create localizations for that 
repository, and those will be hosted in 
https://hg.mozilla.org/l10n-central/.


We'll use those repositories to just build as usual, from m-c, m-b, etc.

Developer impact:

We'll need string IDs to be unique across channels. But that's really 
it. We'll help with a test running in automation checking that, see 
https://bugzilla.mozilla.org/show_bug.cgi?id=1353680.
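
As a rough illustration of the kind of collision such a test looks for 
(purely a sketch; the real check in the bug above compares string IDs 
across channels, not just within one tree):

    # list .properties keys that occur more than once under a directory
    grep -rhoE '^[A-Za-z0-9._-]+=' --include='*.properties' \
      browser/locales/en-US | sort | uniq -d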


Benefits for Firefox:

We'll be shipping localizations quicker. We'll be shipping fixes to 
localizations quicker. And we'll be dealing with string problems in a 
straightforward way.


Overall, we're expecting localizers to be more involved, and more 
frequently involved, and to generally increase product quality by 
removing obstacles and confusing bits and pieces of where to fix a thing 
and how to make that fix stick.


Are we done yet?

No, there's going to be a number of things we'll need to improve after 
the initial landing. For one, we want to run the tool that creates the 
repository in automation; I'll run it at home for a while to tell the 
difference between problems in my code and your code.
The other interesting bit is going to be dealing with closed heads and 
debugparents as part of merge days. But that has 6 weeks' time, as the 
next couple of merges are only forks for x-channel for now.


Timeline:

We've been talking to various stakeholders in the Firefox team, to our 
VCS friends, and to localizers for quite some time. Firefox 57 is a 
forcing function for actually landing this, to deliver on our quality 
expectations, but also to deal with the slightly different merge plans 
that Sylvestre announced earlier today. Actually shipping x-channel with 
57 is the only really sane and polite way to deal with that plan.


The tracking bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1353655.

Happy to answer questions

Axel

PS: a hundred repositories to rule them all? Yeah, we're moving from 3-4 
for each of our ~100 localizations to one for each. We intend to 
continue to ship a different version per locale and Firefox product 
going forward, and that's just easier to digest with independent 
repositories per locale.



Re: Mac/Win32/Win64 Nightly builds for some locales aren't being updated since 08-08

2017-08-11 Thread Axel Hecht

On 11.08.17 at 11:07, rodrigo.mcu...@hotmail.com wrote:

Can someone enlighten me on why some locales' Nightly builds are not being 
updated since August 8th?

This probably isn't the best place to post this, but I don't know if this is 
intended or not so I rather ask here than submit a bug.

Thanks!



Please submit a bug if it's not working today; it seems like 
https://bugzilla.mozilla.org/show_bug.cgi?id=1389260 got fixed for 
today's nightly. Also, CC me?


Thanks

Axel


Re: Phabricator and confidential reviews

2017-08-09 Thread Axel Hecht

To answer the question not asked ;-)

I think we should strive to have as few people as possible with general 
access to security bugs. The concern folks have when crossing borders is 
awful, and so is the general risk profile. Saying that as someone that 
neither has nor wants access to security bugs in general.

So, in that sense, I think we should make this a general assumption: 
that the folks writing patches and doing reviews are not in the group of 
the bug.

As to mirroring the information, I think it'd be good if there couldn't 
be people on phabricator that are not on the bug. That way, the folks on 
the bug that manage its security risk have one way of tracking the 
visibility of the security issue.

I could see how not everybody involved in the bug automatically wants 
all the bugmail. I personally wouldn't mind if I could opt in to that, 
but I'd be annoyed if I had to ask folks to manually add me again after 
I asked them to CC me on the bug. More a question of whether it's 
technically feasible: could folks ask phabricator for access by 
following the link, and have it go back and check bugzilla on demand to 
decide? Depends a bit on the additional private-attachment thing that 
Nicolas mentioned.

Axel

On 09.08.17 at 02:30, Mark Côté wrote:

(Cross-posted to mozilla.tools)

Hi, I have an update and a request for comments regarding Phabricator and 
confidential reviews.

We've completed the functionality around limiting access to Differential revisions (i.e. code 
reviews) that are tied to confidential bugs.  To recap the original plan, various security groups 
in BMO are mirrored to Phabricator as "projects", containing the same set of users.  When 
a bug has such a security group added to it, e.g. dom-core-security, thus restricting its 
visibility largely to members of that group, a Phabricator "policy" is similarly set on 
any associated revisions, restricting their visibility to the same group of people (plus the author 
of the revision, if they are not in the project already).

However, users outside of the security group(s) can see confidential bugs if 
they are involved with them in some way.  Frequently the CC field is used as a 
way to include outsiders in a bug.

Phabricator has a similar feature, called "subscribers", which, as with CCs, 
both grants visibility to confidential revisions and also sends email updates when the 
revision changes.  It was suggested that we attempt to synchronize CC and subscriber 
lists.

First I want to double check that this is truly useful.  I am not sure how 
often CCed users are involved with confidential bugs' patches (I might be able 
to ballpark this with some Bugzilla searches, but I don't think it would be 
easy to get a straight answer).  Anecdotally I have been told that a lot of the 
time users are CCed just to be informed of the problem, e.g. a manager might 
want to be aware of a vulnerability.  Given that adding subscribers to a 
revision is just as easy as CCing a user on a bug, if it is infrequent that 
outsiders need to be involved in reviewing confidential patches, I lean towards 
taking the simple route of making this manual.

However if this is more common than I suspect, then we must decide how to 
synchronize the lists.  The most straightforward approach is one-way 
synchronization from BMO, that is, anyone CCed on the bug will automatically be 
added as a subscriber to any associated revisions, but anyone manually added to 
the subscribers list who is not CCed on the bug would be automatically removed 
by the BMO-Phabricator synchronization routine.  The alternative is to keep 
track of who was manually added and who was automatically synchronized, which 
gets complicated rather quickly, both in terms of implementation and usability.

The second question that would come up is whether this synchronization should 
apply to all revisions or just confidential ones.  Given the dual nature of 
CCs/subscribers, for both visibility and notifications, I lean towards only 
doing this synchronization for confidential revisions, where it is more 
important.  A further justification for limiting the mirroring is that 
Phabricator has a much more powerful and fine-grained notification system 
(Herald) than BMO's product- and component-watching feature.  Automatic 
mirroring everywhere would reduce the utility of the former.

If you have any thoughts on this, please reply.  I'll answer any questions and 
summarize the feedback with a decision in a few days.  Note that we can, of 
course, try a simple approach to start, and add in more complex functionality 
after an evaluation period.

To sum up, there are three questions:

1. Is mirroring a confidential bug's CC list to associated Differential 
revisions' subscriber lists actually useful?  That is, does the utility justify 
the cost of implementation and maintenance?

2. If yes, is one-way mirroring, from BMO to Differential, sufficient?

3. Again if #1 is yes, should such 

Creating a localized build locally

2017-08-04 Thread Axel Hecht

Hi,

cross-posting this from 
https://blog.mozilla.org/l10n/2017/08/04/create-a-localized-build-locally/.


Yesterday we changed the way that you create localized builds on 
mozilla-central.


This works for developers doing regular builds, as well as developers or 
localizers without a compile environment. Sadly, users of artifact 
builds are not supported [1].


For language packs, a mere

./mach build langpack-de

will work. If you’d rather wish to build a localized package, you’ll 
want to get the package first. If you’re building yourself, that’s


./mach package

and if you want to get a Nightly build from archive.mozilla.org, just

./mach build wget-en-US

If you want to do that for Firefox for Android, you’ll need to specify 
which platform you want. Set EN_US_BINARY_URL to the 
latest-mozilla-central-* path for the binary you want to test. If you 
have a good suggestion for a default, we'd need something like 
https://hg.mozilla.org/mozilla-central/rev/64a69b2cebbb.
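
A hedged example (the exact latest-mozilla-central-* directory name for 
your Android target is an assumption):

    export EN_US_BINARY_URL=https://archive.mozilla.org/pub/mobile/nightly/latest-mozilla-central-android-api-16
    # then run the installers-<locale> target as shown below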


And then you just

./mach build installers-fr

That’ll take care of getting the French l10n repository, and do all 
the necessary things to get you a nice little installer/package in dist. 
Pick your favorite language from our repositories [2]. Care for an RTL 
build? ./mach build installers-fa will get you a Persian build.
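
Putting the desktop steps together, for people without a compile 
environment (the commands are the ones above, in order):

    # fetch the en-US Nightly package instead of building it
    ./mach build wget-en-US
    # repack it with the French localization
    ./mach build installers-fr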


As with other repositories we clone into ~/.mozbuild, you’ll want to 
update those every now and then. They’re in l10n-central/*, a repository 
for each language you tried. Either via hg or git-cinnabar, depending on 
your m-c checkout.


Documentation is on 
https://gecko.readthedocs.io/en/latest/build/buildsystem/locales.html, 
bugs go to Core Build [3]. This works for Firefox, Firefox for Android, 
and Thunderbird.


And now you can safely forget all the things you never wanted to know 
about localized builds.


Axel

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1387485
[2] https://hg.mozilla.org/l10n-central/?sort=lastchange
[3] 
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core=Build%20Config=l...@mozilla.com



Re: More Rust code

2017-07-18 Thread Axel Hecht

On 17.07.17 at 21:43, Ted Mielczarek wrote:

Nick,

Thanks for kicking off this discussion! I felt like a broken record
talking to people about this in SF. From my perspective Rust is our
single-biggest competitive advantage for shipping Firefox, and every
time we choose C++ over Rust we throw that away. We know the costs of
shipping complicated C++ code: countless hours of engineering time spent
chasing down hard-to-reproduce crashes, exploitable security holes, and
threading issues. Organizationally we need to get to a point where every
engineer has the tools and training they need to make Rust their first
choice for writing code that ships with Firefox.

On Mon, Jul 10, 2017, at 09:15 PM, Bobby Holley wrote:

I think this is pretty uncontroversial. The high-level strategic decision
to bet on Rust has already been made, and the cost of depending on the
language is already sunk. Now that we're past that point, I haven't heard
anyone arguing why we shouldn't opt for memory safety when writing new
standalone code. If there are people out there who disagree and think
they
have the arguments/clout/allies to make the case, please speak up.



From my anecdotal experiences, I've heard two similar refrains:

1) "I haven't learned Rust well enough to feel confident choosing it for
this code."
2) "I don't want to spend time learning Rust that I could be spending
just writing the code in C++."

I believe that every developer that writes C++ at Mozilla should be
given access to enough Rust training and work hours to spend learning it
beyond the training so that we can eliminate case #1. With the Rust
training sessions at prior All-Hands and self-motivated learning, I
think we've pretty well saturated the group of early adopters. These
people are actively writing new Rust code. We need to at least get the
people that want to learn Rust but don't feel like they've had time to
that same place.


I've been at (maybe half) a rust training at an allhands, and recently 
found myself looking at writing some code in rust. The experience was 
more about understanding other people's code, and re-using parts of it. 
Given that experience, I'd like to ask for a few more things:


Readable Rust. We spent half an hour on 3 lines of code, and it 
shouldn't be like that. I'm not sure if that was because the code was 
written badly, or because reading rust code requires dedicated training.


Copy-and-paste Rust code. In my experience, that doesn't work like in 
any other language we frequently use. I'm used to copying, cutting out 
the thing the code did originally, and then incrementally filling in my 
own stuff. That works in most languages, but in Rust it seems to break 
really, really badly.


Documentation improvements. I've hit quite a few documentation pieces 
that merely stated the existence of the thing I was looking for. My 
point of failure was Rc, which seems to have gotten quite a bunch of doc 
updates in the meantime, which is good.


I guess what I'm asking for is training on how to deal with rust code 
that other people wrote, maybe more so than writing rust code from 
scratch, starting with hello-world.


Axel



For case #2, there will always be people that don't want to learn new
languages, and I'm sympathetic to their perspective. Learning Rust well
does take a large investment of time. I don't know that I would go down
the road of making Rust training mandatory (yet), but we are quickly
going to hit a point where "I don't feel like learning Rust" is not
going to cut it anymore. I would hope that by that point we will have
trained everyone well enough that case #2 no longer exists, but if not
we will have to make harder choices.

  

The tradeoffs come when the code is less standalone, and we need to weigh
the integration costs. This gets into questions like whether/how Rust
code
should integrate into the cycle collector or into JS object reflection,
which is very much a technical decision that should be made by experts. I
have a decent sense of who some of those experts might be, and would like
to find the most lightweight mechanism for them to steer this ship.


We definitely need to figure out an ergonomic solution for writing core
DOM components in Rust, but I agree that this needs a fair bit of work
to be feasible. Most of the situations I've seen recently were not that
tightly integrated into Gecko.

-Ted





Re: Scope of XML parser rewrite?

2017-05-24 Thread Axel Hecht

On 24.05.17 at 09:34, Anne van Kesteren wrote:

On Tue, May 23, 2017 at 8:23 PM, Eric Rahm  wrote:

I was hoping to write a more thorough blog post about this proposal (I have
some notes in a gist), but for now I've added comments inline. The main
takeaway here is that I want to do a bare-bones replacement of just the
parts of expat we currently use. It needs to support DTD entities, have a
streaming interface, and support XML 1 v4. That's it, no new features, no
rewrite of our entire XML stack.


"XML5" supports entities (at least my original version did), I think
the main problem is that there's no support for external DTDs. Not
sure how much that differs from parsing the internal subset. Either
way, that's always been a feature that as far as the web is concerned
is not supported so could conceivably be a Firefox-only thing. Only
XUL needs it.


Technical correction: our use of DTDs is independent of XUL; we use the 
same thing for XHTML UI parts. That is the reason why we're not using 
HTML there.


We do intend to get rid of it; that's what L20n and Fluent are for, and 
we're more than happy to see more people fight for that :-)


Truth be told, though, we can only drop support when the last bit of UI 
is converted to L20n, and not just in Firefox, but also the other stuff. 
Y'know, Thunderbird, too, I guess.


Axel


My current goal is a drop-in replacement for expat with just the features
gecko cares about, so just 1.0 version 4 I guess. It's possible whatever we
end up with could be merged with another library when XML5 is settled, but I
don't want to wait for that.


Contrary to Henri, I think XML 1.0 edition 5 (which isn't "XML5") is
worth considering given
https://bugzilla.mozilla.org/show_bug.cgi?id=501837. It's what Chrome
ships and our current implementation doesn't seem to align with either
the 4th or 5th edition of XML 1.0.






Re: Scope of XML parser rewrite?

2017-05-23 Thread Axel Hecht

On 23.05.17 at 16:01, Daniel Fath wrote:

So, if I understand this correctly - We'll first need to land this
component in Firefox, right? And if it proves itself fine, then formalize
it.


I was thinking of having resolutions for the issues that are currently
warnings in red and multi-vendor buy-in. (Previously, Tab from Google
was interested in making SVG parsing non-Draconian, but I have no idea
how reflective of wider buy-in that remark was.)


You also mentioned warnings in red and multi-vendor buy-in. What does that
entail?

Will lack of support for DTD be a problem? In XML5 it was decided, that
instead of parsing DTD we just store list of named character references
from
https://html.spec.whatwg.org/multipage/syntax.html#named-character-references.
While we could add another list and expand entities.json. It's possible I
need to update spec to reflect that.


Yes, not parsing DTDs would be a deal-breaker for the foreseeable 
future, as we're abusing DTDs to localize X(H)TML documents.


Axel



PS. I hope I'm not spamming you guys too hard, I'm kind of new to the
mailing list thing.

Daniel Fath,
daniel.fa...@gmail.com





Re: Future of out-of-tree spell checkers?

2017-03-22 Thread Axel Hecht

On 22.03.17 at 15:39, Jorge Villalobos wrote:

On 3/22/17 8:10 AM, Henri Sivonen wrote:

On Wed, Mar 22, 2017 at 3:52 PM, Nicolas B. Pierron
 wrote:

On 03/22/2017 09:18 AM, Henri Sivonen wrote:


Without XPCOM extensions, what's the story for out-of-tree spell checkers?

[…], which implements
mozISpellCheckingEngine in JS and connects to the libvoikko[1] back
end via jsctypes. […]



Would compiling libvoikko to WebAssembly remove the need for jsctypes and
XPCOM?


It would remove the need for jsctypes, but how would a WebAssembly
program in a Web Extension get to act as a spell checking engine once
extensions can no longer implement XPCOM interfaces
(mozISpellCheckingEngine in this case)?



Note there is a bug on file to implement an spell-checker API for
WebExtensions: https://bugzilla.mozilla.org/show_bug.cgi?id=1343551

The API request was approved but is low priority.

Jorge



Note, that bug seems to be about using an API like 
mozISpellCheckingEngine from web extensions.

It doesn't seem to be about providing an implementation of one via a web 
extension.


Axel


fyi, ./mach hangs on terminal-notifier? brew update to 1.7.1

2016-10-05 Thread Axel Hecht

Hi,

as an fyi, I almost filed a bug on mach hanging on terminal-notifier 
after the end of a build or packaging step.


Seems that was a bug in terminal-notifier 1.7.0; another brew 
update/upgrade bumped it to 1.7.1 and fixed the hang.
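
In case you're stuck on 1.7.0, the hedged fix that worked for me:

    brew update
    brew upgrade terminal-notifier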


Just in case you've been in that boat.

Axel


Re: Removal of B2G from mozilla-central

2016-10-04 Thread Axel Hecht

On 04/10/16 17:40, Fabrice Desre wrote:

On 10/04/2016 08:34 AM, Axel Hecht wrote:


I'd favor removing at least anything related to l10n from b2g. It never
really worked, and is a half-maintained copy of the almost-working stuff
in mobile.

In my local branches that try to create a test on broken l10n
infrastructure, both mobile and b2g show up, and my preferred way to fix
b2g would be to remove it (the l10n parts).


What b2g specific l10n parts are you talking about? Can you provide links?



https://dxr.mozilla.org/mozilla-central/source/b2g/locales/jar.mn has a 
bunch of overrides, and many of those are also wrong for Fennec.


b2g isn't as horrible as mobile, as it doesn't use the "browser" 
chrome package, but still.


Axel


Re: Removal of B2G from mozilla-central

2016-10-04 Thread Axel Hecht

On 04/10/16 12:16, Gabriele Svelto wrote:

* b2g

   ~20K lines which would also drop considerably due to the removal of
the APIs, completely self-contained



I'd favor removing at least anything related to l10n from b2g. It never 
really worked, and is a half-maintained copy of the almost-working stuff 
in mobile.


In my local branches that try to create a test on broken l10n 
infrastructure, both mobile and b2g show up, and my preferred way to fix 
b2g would be to remove it (the l10n parts).


Axel


Re: Questions about bindings for L20n

2016-06-14 Thread Axel Hecht

On 14/06/16 05:06, zbranie...@mozilla.com wrote:

On Monday, June 13, 2016 at 9:39:32 AM UTC+1, Gijs Kruitbosch wrote:
 > Separately, the documentation put forth so far seems to indicate that

the localization itself is also async, on top of the asyncness of the
mutationobserver approach, and that could potentially result in flashes
of unlocalized content, depending on "how" asynchronous that API really
ends up being. (AFAIK, if the API returned an already-resolved promise,
there might be less chance of that than if it actually went off and did
IO off-main-thread, then came back with some results.)


The DOM localization that is used in response to MutationObserver is sync.



... unless strings trigger a load, either because the initial suite of 
localizations isn't loaded yet, or because the loaded strings trigger a 
runtime error, which requires more l10n files to be loaded. That's 
obviously cached, so it only happens on the first occasion.


Axel



Re: Triage Plan for Firefox Components

2016-03-30 Thread Axel Hecht

Hi Emma,

for those of us who are addicted to data: you have about a thousand 
bugs' worth of data, and I'd love to hear some of the good parts, and 
maybe also some of the bad parts.


Also, you tested with three teams, and you report a success story from 
one. Could you frame that a bit? Is that within expectations, or above 
or below?


Axel

On 29/03/16 22:07, Emma Humphries wrote:

tl;dr

In Quarter Two I'm implementing the work we’ve been doing to improve
triage, make actionable decisions on new bugs, and prevent us from shipping
regressions in Firefox.

Today I’m asking for feedback on the plan which is posted at:

https://docs.google.com/document/d/1FFrtS0u6gNBE1mxsGJA9JLseJ_U6tW-1NJvHMq551ko

Allowing bugs to sit around without a decision on what we will do about
them sends the wrong message to Mozillans about how we treat bugs, how we
value their involvement, and reduces quality.

The Firefox quality team (myself, Mike Hoye, Ryan VanderMeulen, Mark Cote,
and Benjamin Smedberg) want to make better assertions about the quality of
our releases by giving you tools to make clear decisions about which bugs
must be fixed for each release (urgent) and actively tracking those bugs.
What We Learned From The Pilot Program

During the past 6 weeks, we have prototyped and tested a triage process
with the DOM, Hello, and Developer Tools teams.

Andrew Overholt, who participated in the pilot for the DOM team, said, “A
consistent bug triage process can help us spread the load of watching
incoming bugs and help avoid issues falling through the cracks."

During the pilot, the DOM team uncovered critical bugs quickly so that
people could be assigned to them.

The pilot groups also found that the triage process needs to be fast and
have tooling to make going through bugs fast. It’s easy to fall behind on
triage for a component, but if you stay up to date it will take no more
than 15 minutes a day.

You can find the bugs we triaged during the pilot by looking for whiteboard
tags containing ‘btpp-’.

It is also important to have consistent, shared definitions for regression
across components so triagers do not waste effort on mis-labeled bugs.
Comments?

I am posting this plan now for comment over the next week. I intend to
finalize the triage plan for implementation by Tuesday, April 5th. Feedback
and questions are welcome on the document, privately via email or IRC
(where I’m emceeaich) or on the bugmast...@mozilla.org mailing list.
Timeline

January: finish finding component responsible parties

February: pilot review of NEW bugs with four groups of components, draft
new process

Now: comment period for new process, finalize process

Q2: implement new process across all components involved in shipping Firefox
Q3: all newly triaged bugs following the new process

-- Emma Humphries, Bugmaster



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposing preferring Clang over GCC for developer buidls

2016-03-03 Thread Axel Hecht

On 03/03/16 01:57, Jeff Gilbert wrote:

On Wed, Mar 2, 2016 at 3:45 PM, Mike Hommey  wrote:

More importantly, changing the official toolchain has implications on
performance.


Sorry, I meant for general automation. Our final spins (especially
LTO/PGO builds) should remain whatever gives us maximum perf. (not
making any claims myself here!)

Our PGO/LTO builds can take 10x+ what our normal integration builds
take if it nets us a few percentage points of runtime perf.

I suppose it becomes a question of divergence between fast-building
builds and 'final' PGO/fully-optimized builds. We already have this to
some extent with PGO vs non-PGO builds.



This is gonna conflict with the release-promotion work that releng is 
doing now. We're stopping dedicated builds for at least beta, and 
instead just taking a known-good CI build and shipping it.


I also think that we should keep our CI builds close to what we intend 
to ship, for compiler/compiler-dependent bugs.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: rr chaos mode update

2016-02-16 Thread Axel Hecht

On 16/02/16 03:15, Kyle Huey wrote:

Seems like a good thing to expect developers to do locally today.


Two concerns:

What are the success criteria here?

Also, speaking as an occasional code contributor, newcomers and folks 
like me will probably give up on contributing patches earlier.


Axel



- Kyle

On Mon, Feb 15, 2016 at 6:08 PM, Justin Dolske  wrote:


On 2/14/16 9:25 PM, Bobby Holley wrote:

How far are we from being able to use cloud (rather than local) machine

time to produce a trace of an intermittently-failing bug? Some one-click
procedure to produce a trace from a failure on treeherder seems like it
would lower the activation energy significantly.



And with that... At some point, what about having all *new* tests be
battle-tested by X runs of rr-chaos testing?  If it passes, it's allowed to
run in the usual CI automation. If it fails, it's not (and you have a handy
recording to debug).

Justin

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Bug Program Next Steps

2016-01-31 Thread Axel Hecht

Hi,

I'd like to start my feedback with a request.

It'd help me to get a big picture of the stuff that surrounds this 
email. Things I'd like to see are information about who's been consulted 
going into this, and which threads about bug lifecycle got looked at.
It'd also be nice to see how this one should play into more 
forward-looking work.
I know that I wasn't consulted because I didn't insist. I did have the 
chance, though. Others might feel more at ease if they had similar insight.
Also, the question of "is this the problem to solve" benefits from 
context. This one might just be a dependency of the ideas on how to get 
to heavier stuff.


And now to remember all that when I write my own heavy-weight stuff soon.

More details inline.


On 30/01/16 00:45, Emma Humphries wrote:


Bug Program Next Steps

Over the last week, I’ve asked you to step up and identify developers 
who will be responsible for bugs triaged into their component (in 
Firefox, Core, Toolkit, Fennec iOS, and Fennec Android.)



  Why This Matters

Bugs are a unit of quality we can use to see how we’re doing.


We believe that the number of outstanding, actionable bugs is the best 
metric of code quality available, and that how this number changes 
over time will be a strong indicator of the evolving quality of 
Firefox. Actionable bugs are those with the status NEW for which the 
next action required - if any - is unambiguous: does the bug need 
immediate attention, will it be worked on for a future release, will 
it be worked on at all?



There are two parts to maintaining the value of that metric. First, we 
want to make better assertions about the quality of our releases by 
making clear decisions about which bugs must be fixed for each release 
(urgent) and actively tracking those bugs. The other is the number of 
good bugs filed by our community. Filing a bug report is a gateway to 
greater participation in the Mozilla project, and we owe it to our 
community to make quick and clear decisions about each bug.



Making decisions on new bugs quickly helps us avoid point releases, 
and gives positive feedback to people filing bugs so that they will 
file more good bugs.


There's a school of thought that values a non-actionable bug over a bug 
not filed.


I know that for my personal crashers, for example, I look at stack 
traces, and have no clue what could be going wrong there. Or how I could 
provide value in figuring it out.


I do force myself to file bugs on them at times, though. 'Cause if I 
don't file 'em, nobody will, and then there's even less chance to figure 
something out.


I think this is a concern beyond crashes, as we struggle to find a 
balance between not having insight into how things work and, on the 
other hand, a gazillion useless bugs that just say "doesn't work".



  What’s Your Role

Starting in the second quarter of this year, if you’ve taken on a 
component, I’m expecting you or your team to look at the bugs which 
landed in the component on a more frequent basis than a weekly triage.



In February, we’re starting a pilot with four groups of components 
where we’ll get the process changes field tested, so that we can take 
the changes to all the affected bugzilla components for 
review and comment before we implement them across all of the work on 
Firefox.



How are those four groups chosen?



Hold on, we already have a weekly triage!

That’s fantastic, but a weekly pace means we miss bugs that affect 
upcoming releases. So I’m expecting you to scan that list of inbound 
bugs daily for the urgent ones (I’ll define urgent below) and put them 
into one of the states described in the next section, the others can 
go into your regular triage.



At Your Regular Triage

You’ll look at the bugs which landed in your component and decide on 
how to act on them using the states described in the next section.



We don’t have a regular triage

This is a process which you’ll need to start, and the bug program team 
will help with this.



This is potentially a lot of work for one person

Looking at the urgent bugs does not have to be one person’s task. You 
can have a rotation of people doing this. Look at the Core::Graphics 
triage wiki for 
an example of what you could be doing.



  Bug States

Initially, these states will be marked in bugzilla.mozilla.org 
 using whiteboard tags during the pilot. 
The bugzilla team will be making further changes once we’ve settled on 
a process.



You’ll be looking at bugs in your component as they land. We expect 
most of these will be NEW bugs, but some will be 
in UNCONFIRMED.



There are four states you’ll need to decide to put each bug, and in 
your reviews between your team’s weekly triages, we want you to be on 
the watch for bugs with characteristics which make getting it in front 
of someone urgent: these are bugs with crash, topcrash, 

Re: Just Autoland It

2016-01-26 Thread Axel Hecht

Piling on:

I'm using mozreview mostly as an occasional patch author:

Plus, I can schedule a try build. Minus, I need to bother the reviewer 
with a published request in order to do so. I resorted to adding yet 
another hg extension to my local .hg/hgrc.


My most frequent concern is that bugzilla and mozreview use jargon and 
UX flows that have nothing in common. I don't think that either is good 
or better in its own right, either. And the mapping of one to the other 
isn't documented. The "I want to cancel a review, or r-" doc is 
non-existent or hard to find. I just randomly click buttons.


Which is basically what I do whenever I want to do something. I have a 
clear idea and intention on what I want to show up on bugzilla, but not 
on what to do on reviewboard to get there. Which might just be a 
category of documentation that's not written yet. Why I consider that to 
be a problem is gonna be in a separate reply to a different post on this 
thread.


Axel

On 25/01/16 13:26, Honza Bambas wrote:

Writing both as a patch author and a reviewer as well.

- as a patch author I want a full control on when the patch actually
lands (dependencies, any other timing reasons, that only the author
knows the best), "Don't land yet" comment somewhere will easily be
overlooked
- as a reviewer I don't want to bare the decision to land or not to
land, at all
- I want to preserve the mechanism "r+ with comments", which means to
let the patch be landed by the author after updated (means reviewer
doesn't need to look over it again)
- as an author I want to write comments about the patch (about the
structure, what it does, why) to make the review simpler for the
reviewer ; commit message description may not be the best place, right?
- will it be possible to still be using hg patch queues?
- I (and others too) am not fun of MozReview UI.  As a reviewer I found
it very hard to orient in it:
- what is the difference between [Reviews] and [Diff] tab? what is
exactly it's content
- where exactly to click to start a reivew of a patch I want to
review now?  Is in the "Commits" table?  And is it under "Diff" or
"Reviews"?
- how can I mark hunks/files are already reviewed (something I like
on Splinter)?
- how can I see only a single file diff and easily navigate between
files? (as in Splinter)
- few weeks ago I didn't even know how to give an r+!!  it's hidden
under the [Finish review...] *tab*?
- simply said: the UI is everything but self-explanatory and highly
unfriendly, until that is fixed I'm not much willing to use MozReview
mainly as a reviewer

-hb-


On 1/22/2016 3:35, Gregory Szorc wrote:

If you have level 3 source code access (can push to central, inbound,
fx-team) and have pushed to MozReview via SSH, as of a few weeks ago you
can now land commits from the "Automation" drop down menu on MozReview.
(Before only the review request author could trigger autoland.)

This means that anyone [with permissions] can land commits with a few
mouse
clicks! It will even rewrite commit messages with "r=" annotations
with the
review state in MozReview. So if someone does a drive-by review, you
don't
have to update the commit message to reflect that reviewer. Neato!

I've gotten into the habit of just landing things if I r+ them and I
think
they are ready to land. This has startled a few people because it is a
major role reversal of how we've done things for years. (Typically we
require the patch submitter to do the landing.) But I think
reviewer-initiated landing is a better approach: code review is a gate
keeping function so code reviewers should control what goes through the
gate (as opposed to patch authors [with push access] letting themselves
through or sheriffs providing a support role for people without push
access). If nothing else, having the reviewer land things saves time: the
ready-to-land commit isn't exposed to bit rot and automation results are
available sooner.

One downside to autoland is that the rebase will happen remotely and your
local commits may linger forever. But both Mercurial and Git are smart
enough to delete the commits when they turn into no-ops on rebase. We
also
have bug 1237778 open for autoland to produce obsolscence markers so
Mercurial will hide the original changesets when you pull down the
rebased
versions. There is also potential for some Mercurial or Git command magic
to reconcile the state of MozReview with your local repo and delete local
commits that have been landed. This is a bit annoying. But after
having it
happen to me a few times, I think this is a minor annoyance compared
to the
overhead of pulling, rebasing, rewriting commit messages, and pushing
locally, possibly hours or days after review was granted.

I encourage my fellow reviewers to join me and "just autoland it" when
granting review on MozReview.

gps
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform






Re: Improving blame in Mercurial

2015-12-12 Thread Axel Hecht

On 12/11/15 10:46 PM, Joshua Cranmer  wrote:

On 12/11/2015 5:17 PM, Gregory Szorc wrote:

If you have ideas for making the blame/annotate functionality better,
please capture them at https://www.mercurial-scm.org/wiki/BlamePlan or
let
me know by replying to this message. Your feedback will be used to drive
what improvements Mercurial makes.


A "reverse blame" feature that shows when a line in an old revision was
deleted or changed in a newer revision is something I've desperately
wanted.


I just recently successfully used `hg grep --all` for that; it lists 
every revision in which matches for a pattern were added or removed, 
marking them with + and -.

Axel


(Relatedly, I know a lot of you want a Mercurial repo with CVS history to
facilitate archeology. I hope to have that formally established in Q1.
Stay
tuned.)


Are you planning on letting comm-central attach to the CVS history as well?



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using the Taskcluster index to find builds

2015-12-02 Thread Axel Hecht

On 12/1/15 3:48 PM, Chris AtLee wrote:

Localized builds should be at e.g.
gecko.v2.mozilla-central.latest.firefox-l10n.win32-opt

And yes, once we've got the naming structure nailed down, wget-en-US should
change to use the index.


I would expect l10n nightlies to be under nightly?

How does one distinguish nightlies from non-nightlies under 
mozilla-central.latest? Assuming that nightlies might end up there on 
occasion?
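
For reference, resolving such a namespace down to actual artifacts 
should look roughly like the sketch below; that's against the index and 
queue endpoints as I understand them from the docs, so treat the exact 
routes as my assumptions:

  import json
  import urllib2

  # Sketch only: resolve an index namespace to a taskId, then list that
  # task's artifacts.
  NAMESPACE = "gecko.v2.mozilla-central.latest.firefox-l10n.win32-opt"

  index_url = "https://index.taskcluster.net/v1/task/%s" % NAMESPACE
  task = json.load(urllib2.urlopen(index_url))

  artifacts_url = ("https://queue.taskcluster.net/v1/task/%s/artifacts"
                   % task["taskId"])
  for artifact in json.load(urllib2.urlopen(artifacts_url))["artifacts"]:
      print artifact["name"]

Something along those lines could also be the basis for an index-bound 
wget-en-US replacement.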


Axel



On Tue, Dec 1, 2015 at 5:22 AM, Axel Hecht <l...@mozilla.com> wrote:


I haven't found localized builds and their assets by glancing at things.
Are those to come?

Also, I suspect we should rewrite wget-en-US? Or add an alternative that's
index-bound?

Axel

On 11/30/15 9:43 PM, Chris AtLee wrote:


The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
work behind the scenes over the past few months to migrate the backend
storage for builds from the old "FTP" host to S3. While we've tried to
make
this as seamless as possible, the new system is not a 100% drop-in
replacement for the old system, resulting in some confusion about where to
find certain types of builds.

At the same time, we've been working on publishing builds to the
Taskcluster Index [1]. This service provides a way to find a build given
various different attributes, such as its revision or date it was built.
Our plan is to make the index be the primary mechanism for discovering
build artifacts. As part of the ongoing buildbot to Taskcluster migration
project, builds happening on Taskcluster will no longer upload to
https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut
off
platforms in buildbot, the index will be the only mechanism for
discovering
new builds.

I posted to planet Mozilla last week [2] with some more examples and
details. Please explore the index, and ask questions about how to find
what
you're looking for!

Cheers,
Chris

[1] http://docs.taskcluster.net/services/index/
[2]
http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using the Taskcluster index to find builds

2015-12-01 Thread Axel Hecht
I haven't found localized builds and their assets by glancing at things. 
Are those to come?


Also, I suspect we should rewrite wget-en-US? Or add an alternative 
that's index-bound?


Axel

On 11/30/15 9:43 PM, Chris AtLee wrote:

The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
work behind the scenes over the past few months to migrate the backend
storage for builds from the old "FTP" host to S3. While we've tried to make
this as seamless as possible, the new system is not a 100% drop-in
replacement for the old system, resulting in some confusion about where to
find certain types of builds.

At the same time, we've been working on publishing builds to the
Taskcluster Index [1]. This service provides a way to find a build given
various different attributes, such as its revision or date it was built.
Our plan is to make the index be the primary mechanism for discovering
build artifacts. As part of the ongoing buildbot to Taskcluster migration
project, builds happening on Taskcluster will no longer upload to
https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut off
platforms in buildbot, the index will be the only mechanism for discovering
new builds.

I posted to planet Mozilla last week [2] with some more examples and
details. Please explore the index, and ask questions about how to find what
you're looking for!

Cheers,
Chris

[1] http://docs.taskcluster.net/services/index/
[2] http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why do we flush layout in nsDocumentViewer::LoadComplete?

2015-11-26 Thread Axel Hecht

On 11/27/15 4:09 AM, Robert O'Callahan wrote:

On Fri, Nov 27, 2015 at 3:59 PM, Boris Zbarsky  wrote:


On 11/26/15 9:24 PM, Robert O'Callahan wrote:


We've always done it, but I can't think of any good reasons.



I've tried to fix this in the past and ran into two problems.

The first problem was that some tests failed as a result.  This is
somewhat minor, really.

The second problem, pointed out by the first, is that some tests stopped
testing what they mean to be testing, because all of our reftests and
crashtests assume layout gets flushed onload, so they can test dynamic
behavior by doing stuff after that.

See https://bugzilla.mozilla.org/show_bug.cgi?id=581685 for details.  I
haven't had a chance to get back and really figure this out, though we
should.



Mmmm. This could be a significant win!

Rob



I wonder how much of the web could rely on this, given that our tests do?

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merge moved to Thursday 29th

2015-10-29 Thread Axel Hecht

I also commented in the bug:

If we're doing uplifts, I'm not sure we're winning by uplifting 
pre-landed strings.


Either way, I think the risk assessment of the patch should be that of 
the actual patch that uses the strings, not "just adding strings".


Also, I'd appreciate an ETA for the patch, as that also goes into the 
risk that a patch comes with.


Axel

On 10/29/15 1:41 PM, Sylvestre Ledru wrote:

Please request the uplift. Under specific circumstances (like this one), we 
take string changes in aurora.

Thanks
Sylvestre

On 29/10/2015 13:39, Masatoshi Kimura wrote:

I missed two commits for 44 branch.
https://treeherder.mozilla.org/#/jobs?repo=fx-team=6b3c99e54177
Uplift requests will not help because they are string changes.

On 2015/10/29 5:10, Sylvestre Ledru wrote:

Hello,

Because we want to synchronize the release of 42 and 44 devedition (next
Tuesday),
we are planning to perform the merge tomorrow, Thursday.
As a consequence, nightly = 45, aurora = 44 and beta = 43 (42 is already
in release).
This will give us enough time to validate the first aurora build.

I apologize for the very late notice. Of course, we will be friendly
with uplift requests.

Sylvestre


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Default window size on a screen of 1920x1080 or 1280x1024?

2015-09-16 Thread Axel Hecht

Hi,

we're trying to find out what the default window size would be for 
people on screens of 1920x1080 or 1280x1024.


Sadly, I can't find the code that actually computes that for the heck of 
it, can anybody help?


Background: We want to ensure that the new about:privatebrowsing has the 
panels stacked side-by-side, but that depends on the window size. So 
we're trying to find out common window sizes for people. Some 
conversation in https://bugzilla.mozilla.org/show_bug.cgi?id=1198287


Thanks

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Default window size on a screen of 1920x1080 or 1280x1024?

2015-09-16 Thread Axel Hecht

Thanks, exactly what I was looking for.

Axel

On 9/16/15 7:13 PM, Gavin Sharp wrote:

See 
https://hg.mozilla.org/mozilla-central/annotate/3e8dde8f8c17/browser/base/content/browser.js#l1017
if you're wondering about Firefox specifically.

Gavin

On Wed, Sep 16, 2015 at 7:26 AM, Axel Hecht <l...@mozilla.com> wrote:

Hi,

we're trying to find out what the default window size would be for people on
screens of 1920x1080 or 1280x1024.

Sadly, I can't find the code that actually computes that for the heck of it,
can anybody help?

Background: We want to ensure that the new about:privatebrowsing has the
panels stacked side-by-side, but that depends on the window size. So we're
trying to find out common window sizes for people. Some conversation in
https://bugzilla.mozilla.org/show_bug.cgi?id=1198287

Thanks

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Largest chunks of code that are likely to be removable?

2015-06-30 Thread Axel Hecht

On 6/30/15 9:13 AM, Mike Hommey wrote:

On Mon, Jun 29, 2015 at 11:19:08PM -0700, Nicholas Nethercote wrote:

Hi,

I'm wondering what the largest chunks of code there are in the
codebase that are candidates for removal, i.e. probably with a bit of
work but not too much.

One that comes to mind is rdf/ (see
https://bugzilla.mozilla.org/show_bug.cgi?id=1176160#c5) though I
don't have a good understanding of how much stuff depends on it, even
having seen https://bugzilla.mozilla.org/show_bug.cgi?id=420506.


See the dependencies of bug 833098.

Mike



Note, that bug has the dependencies to move rdf/ from mozilla-central 
into comm-central. Mail has many more dependencies on RDF, I think.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-09 Thread Axel Hecht

I recall that at least one group actively uses votes to prioritize stuff.

I can't really tell which one, I'm leaning towards devtools, but I don't 
have any data to back that up.


I mostly remember because I was surprised.

Also, for a component like devtools, I can see how it'd make sense.

Axel

On 6/10/15 12:09 AM, Mark Côté wrote:

In a quest to simplify both the interface and the maintenance of
bugzilla.mozilla.org, we're looking for features that are of
questionable value to see if we can get rid of them.  As I'm sure
everyone knows, Bugzilla grew organically, without much of a road map,
over a long time, and it experienced a lot of scope bloat, which has
made it complex both on the inside and out.  I'd like to cut that down
at least a bit if I can.

To that end, I'd like to consider the voting feature.  While it is
enabled on a quite a few products, anecdotally I have heard
many times that it isn't actually useful, that is, votes aren't really
being used to prioritize features & fixes. If your team uses voting,
I'd like to talk about your use case and see if, in general, it makes
sense to continue to support this feature.

Thanks,
Mark



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozillabuild 2.0 ready for testing

2015-03-09 Thread Axel Hecht

Hi Ryan,

good news.

One thing that's a bit unfortunate from the l10n perspective is the svn drop. 
SVN is still used quite frequently to host the website localizations, so 
keeping that in would be helpful.

Axel, who should probably give this a test run in reals on his VM.

On 3/9/15 2:44 AM, Ryan VanderMeulen wrote:

For the past many months, I have been working on some major updates to the
Mozillabuild package in order to make it more developer-friendly and
easily-maintainable in the future. I am proud to say that at this time, it
is ready for more widespread testing.

The latest test build can be downloaded from the link below:
http://people.mozilla.org/~rvandermeulen/MozillaBuildSetup2.0.0pre4.exe
sha1sum: 9edb5c51bdb5f9a5d86baf5046967d8940654353

Release notes are available below. A few general notes:
* It is strongly advised that you *NOT* install this over a previous 1.x
install. Changes have been made to the directory structure and underlying
packages that could result in unexpected results should you choose to do so.
* As is always the case when updating Mozillabuild, you should clobber
after installing.
* Bugs that you come across can be filed in the mozilla.org::MozillaBuild
component.

My goal is to ship the final release in two weeks, so any feedback you can
provide now would be welcome!

Thanks,
Ryan

-

SIGNIFICANT CHANGES
* Added support for MSVC2015 and dropped support for MSVC 2013, WinSDK
8.1, and MSVC Express Edition.
   - MSVC Community Edition is now the recommended free compiler option
* Added minTTY 1.1.3 and enabled it as the default console.
   - Windows cmd.exe can be used by default by removing the 1 from |SET
USE_MINTTY=1| near the top of start-shell.bat
* Overhauls to the start-msvc* batch scripts that improve consistency and
simplify maintenance.
   - To launch a plain terminal with no MSVC path setting, use
start-shell.bat (was start-shell-l10n.bat in previous releases)
* Updated Mercurial to version 3.3.2 and switched to the native python
version.
   - Allows extensions like bzexport that rely on previously-unavailable
python components to work correctly.
   - Enables faster future updates in the event of serious bugs or security
issues.
   - Enabled extensions: blackbox, color, histedit, mq, pager, progress,
purge, rebase, share, transplant
   - See the Known Issues section for regressions from this change.
* Updated python to version 2.7.9.
   - Included packages: pip 6.0.8, setuptools 14.0, virtualenv 12.0.7

OTHER UPDATES/ADDITIONS/REMOVALS
* Removed SVN
* Updated 7-zip to version 9.20
* Updated bundled CA certs to those from NSS 3.17.4
* Updated emacs to version 24.4
* Updated MSYS to version 1.0.18 and various components to the latest
available for MSYS1
* Updated wget to version 1.16.1

KNOWN ISSUES
* Changes in behavior due to using minTTY instead of Windows cmd.exe
* Problems with python scripts that can't find an hg executable in the path
(bug 1128586, bug 1131048)



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Testing for expected crashes in both C++ and js, for printf checking

2015-02-17 Thread Axel Hecht

Hi,

I'd like to write tests to validate my assumptions around what's an error and 
what's a warning for localized values going into nsTextFormatter::smprintf.

Basically, the tests would start with a reference string, then a more or 
less random modification of that string, and a check whether the segments 
are in the output, or whether it crashes [1].

So I'll need a .cpp core, and a wrapper that feeds it data and checks the 
output.

Any suggestions on how to do that right?

Axel

[1] '%f' being the reference string, '%S' being the localization, pass in 5.
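
To make the setup concrete, the data-generation half could look 
something like this Python sketch; the specifier list and the mutation 
strategy are just my assumptions, and the .cpp core that actually feeds 
the results into nsTextFormatter::smprintf is the part I'm asking about:

  import random

  # Sketch of the mutation step: swap conversion specifiers in a
  # reference string to simulate what a localization might do to it.
  # Some candidates may come back unchanged; that's fine for fuzzing.
  SPECIFIERS = ["%d", "%f", "%s", "%S", "%x"]

  def mutations(reference, count=20):
      for _ in xrange(count):
          candidate = reference
          for spec in SPECIFIERS:
              if spec in candidate and random.random() < 0.5:
                  candidate = candidate.replace(
                      spec, random.choice(SPECIFIERS), 1)
          yield candidate

  for localized in mutations("value: %f"):
      print localized  # e.g. "value: %S", the crashing case from [1]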
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating localstore.rdf

2014-08-05 Thread Axel Hecht

On 8/4/14 9:55 PM, Benjamin Smedberg wrote:


On 7/22/2014 8:47 AM, Roberto Agostino Vitillo wrote:

Localstore.rdf will soon be replaced with a json store (see Bug
559505). I am currently planning to leave the localstore.rdf
implementation as it is and issue a warning when a client tries to
access to it. This is needed as some add-ons seem still to rely on it.
We could use some Telemetry probes to see effectively how many add-ons
are still using the rdf store once the patch lands.

Are there any objections or remarks to the deprecation of localstore.rdf?


This does involve a one-time import of localstore data into the new
format, correct?

I'm happy that we are doing this. I *believe* that this may be the last
client of the RDF code in Firefox, which may allow us to remove RDF from
Firefox in a future release. Do you already have an addon validation
warning about addons using localstore?

--BDS



How much of XUL templates with RDF do we still support? I never kept 
track of templates to start with :-/


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: .properties and a single \u

2014-07-17 Thread Axel Hecht
gandalf, stas, and I talked about this a bit, and we intend to settle 
with my mental parser plus a compare-locales warning:


Broken escapes just pass through, \u -> u, but we'll extend 
compare-locales to issue a warning on the l10n dashboard in those cases.


The work is tracked in https://bugzilla.mozilla.org/show_bug.cgi?id=1040019.
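
To make the intended semantics concrete, here's a rough Python sketch 
of my reading of that behavior; it's an illustration, not the actual 
compare-locales code:

  import re

  # Known escapes resolve, a broken escape just drops the backslash,
  # and the parser records a warning instead of dying.
  KNOWN = {"n": "\n", "t": "\t", "r": "\r", "\\": "\\"}

  def unescape(value, warnings):
      out, i = [], 0
      while i < len(value):
          if value[i] != "\\":
              out.append(value[i])
              i += 1
              continue
          nxt = value[i + 1:i + 2]
          if nxt == "u" and re.match(r"[0-9a-fA-F]{4}$", value[i + 2:i + 6]):
              out.append(unichr(int(value[i + 2:i + 6], 16)))
              i += 6
          elif nxt in KNOWN:
              out.append(KNOWN[nxt])
              i += 2
          else:
              warnings.append("unknown escape \\%s" % nxt)
              out.append(nxt)  # \unicode -> unicode, \a -> a
              i += 2
      return "".join(out)

  warnings = []
  assert unescape(r"some \unicode", warnings) == "some unicode"
  assert unescape(r"some \a", warnings) == "some a"
  assert len(warnings) == 2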

Axel

On 7/11/14 4:26 PM, Axel Hecht wrote:

Hi,

in .properties

   foo = some \unicode
   bar = some \a

creates the most icky output.

I'd like to get a defined behavior, but it turns out to be hard.

Java:
- dies with a parsing error on foo, bar is some a

XPCOM:
- returns some , as \u is converted to \0 on foo, bar is some a

Gaia:
- returns some \unicode, some \a
(doesn't drop unknown escapes apparently)

Compare-locales:
- dies with a python error on parsing in my code on foo, bar is some a

My mental parser:
- returns some unicode, some a


I dislike both java and compare-locales, and I think that both gaia and
xpcom don't work great.

What's your take?

Axel

cross-posting to .platform for xpcom, .tools.l10n for gaia's l10n.js


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


.properties and a single \u

2014-07-11 Thread Axel Hecht

Hi,

in .properties

  foo = some \unicode
  bar = some \a

creates the most icky output.

I'd like to get a defined behavior, but it turns out to be hard.

Java:
- dies with a parsing error on foo, bar is some a

XPCOM:
- returns some , as \u is converted to \0 on foo, bar is some a

Gaia:
- returns some \unicode, some \a
(doesn't drop unknown escapes apparently)

Compare-locales:
- dies with a python error on parsing in my code on foo, bar is some a

My mental parser:
- returns some unicode, some a


I dislike both java and compare-locales, and I think that both gaia and 
xpcom don't work great.


What's your take?

Axel

cross-posting to .platform for xpcom, .tools.l10n for gaia's l10n.js
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Where is the document for jar.mn file format

2014-07-02 Thread Axel Hecht

On 7/2/14 12:25 PM, Yonggang Luo wrote:

I am using Mozilla XUL SDK to build my own application,
So I'd like to know what's the format of jar.mn file



Took me a while to find it, but I think that 
https://ci.mozilla.org/job/mozilla-central-docs/Tree_Documentation/buildsystem/jar-manifests.html 
is the place to look.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Spring cleaning: Reducing Number Footprint of HG Repos

2014-03-27 Thread Axel Hecht

On 3/27/14, 12:53 AM, Taras Glek wrote:

*User Repos*
TLDR: I would like to make user repos read-only by April 30th. We should
archive them by May 31st.

Time spent operating user repositories could be spent reducing our
end-to-end continuous integration cycles. These do not seem like
mission-critical repos, seems like developers would be better off
hosting these on bitbucket or github. Using a 3rd-party host has obvious
benefits for collaboration & self-service that our existing system will
never meet.

We are happy to help move specific hg repos to bitbucket.

Once you have migrated your repository, please comment in
https://bugzilla.mozilla.org/show_bug.cgi?id=988628 so we can free some
disk space.


I think it's utterly sad that we're giving up on hosting instead of 
fixing it.


I have several things in my user repos that only run on our hg server, 
mostly because all other repo hosters don't send correct mimetypes for 
raw files. In particular this affects dashboards I created to share 
aggregated bugzilla data etc.


I'm also sad that we're removing the ability for contributors to share 
their mozilla-central clones, at least in large parts of the world. 
Pushing a full clone to some random server just isn't working for large 
parts of the world.


And all that while the opportunity for us to help you on the data 
consumption is just broken.


Sad.

Note, strategically, I think that mozilla needs to support developing on 
the web, and the github editor isn't it. It'll be web-based IDEs, which 
require good tooling and hosting to be on the same infrastructure.


Axel



*Non-User Repos*
There  are too many non-user repos. I'm not convinced we should host
ash, oak, other project branches internally. I think we should focus on
mission-critical repos only. There should be less than a dozen of those.
I would like to stop hosting non-mission-critical repositories by end of
Q2.

This is a soft target. I don't have a concrete plan here. I'd like to
start experimenting with moving project branches elsewhere and see where
that takes us.

*What if my hg repo needs X/Y that 3rd-party services do not provide?*
If you have a good reason to use a feature not supported by
github/bitbucket, we should continue hosting your repo at Mozilla.

*Why Not Move Everything to Github/Bitbucket/etc?*
Mozilla  prefers to keep repositories public by-default. This does not
fit  Github's business model which is built around private repos.
Github's free  service does not provide any availability guarantee.
There is also a problem of github not supporting hg.

I'm not completely sure why we can't move everything to bitbucket. Some
of it is to do with anecdotal evidence of robustness problems. Some of
it is lack of hooks (sans post-receive POSTs). Additionally, as with
Github there is no availability guarantee.

Hosting arbitrary Moz-related hg repositories does not make strategic
sense. We should do the absolute minimum(eg http://bke.ro/?p=380)
required to keep Firefox shipping smoothly and focus our efforts on
making Firefox better.


Taras


ps. Footprint stats:

*Largest User Repos Out Of ~130GB*
1.1G dmt.alexandre_gmail.com
1.1G jblandy_mozilla.com
1.1G jparsons_mozilla.com
1.2G bugzilla_standard8.plus.com
1.2G mbrubeck_mozilla.com
1.2G mrbkap_mozilla.com
1.3G dcamp_campd.org
1.3G jst_mozilla.com
1.4G blassey_mozilla.com
1.4G gszorc_mozilla.com
1.4G iacobcatalin_gmail.com
1.5G cpearce_mozilla.com
1.5G hurley_mozilla.com
1.6G bsmedberg_mozilla.com
1.6G dglastonbury_mozilla.com
1.6G dtc-moz_scieneer.com
1.6G jlund_mozilla.com
1.6G sarentz_mozilla.com
1.6G sbruno_mozilla.com
1.7G mshal_mozilla.com
1.9G mhammond_skippinet.com.au
2.1G lwagner_mozilla.com
2.4G armenzg_mozilla.com
2.4G dougt_mozilla.com
2.5G bschouten_mozilla.com
2.7G hwine_mozilla.com
2.8G eakhgari_mozilla.com
2.8G mozilla_kewis.ch
2.9G rcampbell_mozilla.com
3.1G bhearsum_mozilla.com
3.1G rjesup_wgate.com
3.2G agal_mozilla.com
3.3G axel_mozilla.com
3.3G prepr-ffxbld
4.2G jford_mozilla.com
4.3G mgervasini_mozilla.com
4.6G lsblakk_mozilla.com
5.0G bsmith_mozilla.com
5.5G nthomas_mozilla.com
5.8G coop_mozilla.com
6.5G jhopkins_mozilla.com
7.7G raliiev_mozilla.com
9.2G catlee_mozilla.com
13G stage-ffxbld

*Space Usage by Non-user repos ~100GB*
24K integration/gaia-1_4
28K addon-sdk
28K projects/collusion
32K integration/gaia-1_1_0
32K projects/emscripten
32K projects/Moz2D
32K releases/mozilla-b2g18_v1_1_0
144K projects/addon-sdk-jetperf-tests
268K ipccode
452K testpilot-l10n
500K releases/firefox-hotfixes
700K projects/python-nss
896K schema-validation
1.2M projects/mccoy
1.4M pyxpcom
2.4M platform-model
2.4M

Re: How to efficiently walk the DOM tree and its strings

2014-03-03 Thread Axel Hecht

Hi,

translating DOM is a bit funky. Generally, you can probably translate 
block elements one by one, but you need to persist inline elements.


You should mark up the inline elements in the string that you send to 
the translation engine, such that you can support inline markup changing 
the order.


Something like

You would think the <a href=foo>funkyness</a> would <strong>rule</strong>.

could translate into

<strong>Ruling</strong> would be the <a href=foo>funkyness</a>, you 
would think.
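
To make that concrete, a rough Python sketch of the placeholder dance; 
the helper names are made up, and a real implementation would need 
sturdier placeholders than <0>:

  # Hide inline elements behind indexed placeholders before sending a
  # sentence out, then re-expand them in whatever order the translation
  # puts them.
  def encode(text, inline_tags):
      # inline_tags: (open_tag, close_tag) tuples, in document order
      for n, (open_tag, close_tag) in enumerate(inline_tags):
          text = text.replace(open_tag, "<%d>" % n, 1)
          text = text.replace(close_tag, "</%d>" % n, 1)
      return text

  def decode(text, inline_tags):
      for n, (open_tag, close_tag) in enumerate(inline_tags):
          text = text.replace("<%d>" % n, open_tag, 1)
          text = text.replace("</%d>" % n, close_tag, 1)
      return text

  tags = [("<a href=foo>", "</a>"), ("<strong>", "</strong>")]
  source = ("You would think the <a href=foo>funkyness</a> "
            "would <strong>rule</strong>.")
  sent = encode(source, tags)
  # sent: "You would think the <0>funkyness</0> would <1>rule</1>."
  translated = "<1>Ruling</1> would be the <0>funkyness</0>, you would think."
  print decode(translated, tags)
  # "<strong>Ruling</strong> would be the <a href=foo>funkyness</a>, you
  # would think."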


Are you intending to also localize tooltips and the like?

Axel


On 3/3/14, 8:28 PM, Felipe G wrote:

Hi everyone, I'm working on a feature to offer webpage translation in
Firefox. Translation involves, quite unsurprisingly, a lot of DOM and
strings manipulation. Since DOM access only happens in the main thread, it
brings the question of how to do it properly without causing jank.

This is the use case that I'm dealing with in bug 971043:

When the user decides to translate a webpage, we want to build a tree that
is a cleaned-up version of the page's DOM tree (to remove nodes that do not
contain any useful content for translation; more details in the bug for the
curious). To do this we must visit all elements and text nodes once and
decide which ones to keep and which ones to throw away.

One idea suggested is to perform the task in chunks to let the event loop
breathe in between. The problem is that the page can dynamically change and
then a tree representation of the page may no longer exist. A possible
solution to that is to only pause the page that is being translated (with,
say, EnterModalState) until we can finish working on it, while letting
other pages and the UI work normally. That sounds a reasonable option to me
but I'd like to hear opinions.

Another option exists if it's possible to make a fast copy of the whole
DOM, and then work on this snapshot'ed copy which is not live. Better yet
if we can send this copy with a non-copy move to a Worker thread. But it
brings the question if the snapshot'ing itself won't cause jank, and if the
extra memory usage for this is worth the trade-off.

Even if we properly chunk the task, it is still bounded by the size of the
strings on the page. To decide if a text node should be kept or thrown away
we need to run a regexp on it, and there's no way to pause that midway
through. And after we have our tree representation, it must be serialized
and encodeURIComponent'ed to be sent to the translation service.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changing how build automation interacts with the tree

2014-03-02 Thread Axel Hecht

Hi,

I've watched you guys thinking for an hour ;-)

Some comments from me.

Yes to moving build flows that generate assets into the tree.
Yes to having a way for developers to reproduce what automation does.
Yes to having jobs being executed more on demand than on push, and 
having that have idempotent results.


Sceptical on the vision that we'll see the end of inbounds. The 
interactions between test results and rebase don't seem to be trivial 
enough to me to hope for non-backout always-open trees via auto land.


I'm having an 'oh noes' for the single command called by automation. My main 
point here is the usefulness of logs generated. When you put all 
sequential and parallel tasks into one single wrapper process, you end 
up with one big log file on ftp, like today. And if anything happens, 
one needs to read that log and reverse engineer which characters in this 
log are stdout/stderr, and to which task they belong. I know I can't 
tell good from bad in our logs.


OTH, you could have all the structure of the process being exposed in 
the automation and its reporting. If something goes wrong, you can tell 
the location of the problem in the process right away, you can drill 
down to the process task, and its dependencies.


If I think of the problem, I'm thinking along these lines: Let's specify 
the process, as a DAG of serialized and parallelized tasks, inside the 
tree, and have the automation run that as is (*, sketched below). Offer developers a 
console-only hook to that fragment of the complete automation process, 
akin to integration tests.


* while using buildbot, parallel tasks would need to be executed 
sequentially. I read in the recent posts by Taras et al. that buildbot 
isn't a solid requirement going forward.
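
To make that concrete, here's a strawman of what such an in-tree DAG 
could look like; the task names and mach subcommands are placeholders, 
not existing commands:

  # Each task names the tasks it depends on; anything whose dependencies
  # are met may run in parallel, and the automation just walks the graph
  # in topological order, with one log per task.
  TASKS = {
      "configure":   {"depends": [],            "cmd": ["./mach", "configure"]},
      "compile":     {"depends": ["configure"], "cmd": ["./mach", "build"]},
      "package":     {"depends": ["compile"],   "cmd": ["./mach", "package"]},
      "l10n-repack": {"depends": ["package"],   "cmd": ["./mach", "repack-l10n"]},
      "tests":       {"depends": ["compile"],   "cmd": ["./mach", "test"]},
  }

  def runnable(done):
      # Tasks whose dependencies are all done; these can run in parallel.
      return [name for name, task in TASKS.items()
              if name not in done and all(d in done for d in task["depends"])]

  done = set()
  while len(done) < len(TASKS):
      batch = runnable(done)
      assert batch, "cycle in the task graph"
      print batch  # each batch is a parallelizable group
      done.update(batch)

That structure is also exactly what the reporting could drill down into, 
instead of one big log.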


A few comments on mozharness. One of the earliest tasks it offered, 
IIRC, was multi-locale android builds. Sadly, it happens that it's not 
helping those developers that want to create and test multi-locale 
builds. It's monolithic deliverable isn't what developers need at the 
point when they test multi-locale builds, nor does it blend in to the 
developer's setup. Folks like rnewman were glad once I explained how to 
avoid using mozharness for their builds. To me that's a sign of an 
inadequate level of abstraction.


And, as it's been mentioned all over the call, l10n repacks:

Testing: repacks are hard to test, and they should be. They're designed 
to be infallible, so that, no matter what happens in a localization, 
they're producing runnable builds. A test is challenged to tell between 
a broken localization and a broken build system. We shouldn't 
overestimate the amount of errors in the build that end up in a build 
bustage, and which of those are actually test failures. And which are 
not generating build failures, but are bustages. One example would be 
broken locale merge dirs, for example. Anything can be in those, and the 
builds build and run fine. They're just not showing the right strings.


More generally, repacks are basically unowned at this point. There's a 
bit of ownership in build, in releng, and me, as to how they're done. 
There's absolutely nothing as far as reporting goes. The agreement 
between John and me was "if there's anything odd, file a bug on releng 
to dig in".


That's as much as I can get out of my brain into writing; I wish I had 
an hour-long video to go back and forth about stuff ;-)


Axel

On 2/28/14, 9:48 PM, Gregory Szorc wrote:

(This is likely off-topic for many dev-platform readers. I was advised
to post here because RelEng monitors dev-platform and I don't like
cross-posting.)

The technical interaction between build automation and mozilla-central
has organically grown into something that's very difficult to maintain
and improve. There is no formal API between automation and
mozilla-central. As a result, we have automation calling into esoteric
or unsupported commands and make targets. Change is difficult because it
must be coordinated with automation changes. Build system maintainers
lack understanding of what is and isn't used in automation. It's
difficult to reproduce what automation does locally.

The current approach slows everyone down, leads to too-frequent breakage
(l10n repacks are a great example), and limits the efficiency of
automation.

I'm elated to state that at a meeting earlier today, we worked out a
solution to these problems! Full details are in bug 978211.

tl;dr we are going to marshal all interaction between automation and the
tree through a mach-like in-tree script. This script will establish a
clear, supported, and auditable API for automation tasks and will
establish a level of indirection allowing the tree to change without
requiring corresponding changes to automation.

Some of the benefits of this approach include:

* Abstracting the build backend away from automation. The tree will
choose GNU make, pymake, mozmake, Tup, etc depending on what it knows is
best. Currently, automation has {make, pymake, mozmake} hard-coded.


Re: W3C Proposed Recommendations: RDF 1.1

2014-01-14 Thread Axel Hecht

As, still, module owner of RDF, I think that's the right thing for us to do.

I haven't actually followed the development of the specs, but I'm 
positive that the development of those specifications doesn't impact us 
as a browser vendor. The impact of RDF is in the web application and 
addons system.


Axel

On 1/14/14 11:04 PM, L. David Baron wrote:

There are eight W3C Proposed Recommendations for RDF 1.1 (two of
which are actually Proposed Edited Recommendations):

RDF Schema 1.1: W3C Proposed Edited Recommendation 09 January 2014
 http://www.w3.org/TR/rdf-schema/
RDF 1.1 XML Syntax: W3C Proposed Edited Recommendation 09 January 2014
 http://www.w3.org/TR/rdf-syntax-grammar/
RDF 1.1 N-Quads: W3C Proposed Recommendation 09 January 2014
 http://www.w3.org/TR/n-quads/
RDF 1.1 N-Triples: W3C Proposed Recommendation 09 January 2014
 http://www.w3.org/TR/n-triples/
RDF 1.1 Concepts and Abstract Syntax: W3C Proposed Recommendation 09 January 
2014
 http://www.w3.org/TR/rdf11-concepts/
RDF 1.1 Semantics: W3C Proposed Recommendation 09 January 2014
 http://www.w3.org/TR/rdf11-mt/
RDF 1.1 TriG: W3C Proposed Recommendation 09 January 2014
 http://www.w3.org/TR/trig/
RDF 1.1 Turtle: W3C Proposed Recommendation 09 January 2014
 http://www.w3.org/TR/turtle/

There's a call for review to W3C member companies (of which Mozilla
is one) open until February 9.

If there are comments you think Mozilla should send as part of the
review, or if you think Mozilla should voice support or opposition
to the specification, please say so in this thread.  (I'd note,
however, that there have been many previous opportunities to make
comments, so it's somewhat bad form to bring up fundamental issues
for the first time at this stage.)

My inclination is to explicitly abstain to indicate this is
something we're not interested or involved in.

-David



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Axel Hecht

Hi,

two points of caution:

In the little version control archaeology I do, I hit "breaks blame for 
no good reason" pretty often already. I wouldn't underestimate the cost 
for the project of doing changes just for the sake of changes.


Tools don't get code right. I've seen various changes where tools 
reformat code they don't understand, and break it. Trailing whitespace 
is significant in some of our file formats, for example.


Axel

On 1/6/14 6:55 PM, Martin Thomson wrote:

I think that this is a good start, but it doesn’t go quite far enough.

Part of the problem with a policy that requires people to avoid reformatting of 
stuff they don’t touch is that it propagates formatting divergence.  Sometimes 
because it’s better to conform to the style immediately adjacent the changed, 
but more so because it prevents the use of tools that reformat entire files 
(here, I’m not talking about things like names, but more whitespace 
conventions).

For whitespace at least, I think that we need to do the following:

  1. pick a style

I really, really don’t care what this is.  I’m thinking that we pick whatever 
people think that the current style is and give folks a fixed period to debate 
changes.

  2. create some tools

These tools should help people conform to the style.

Primarily, what is needed is a tool with appropriate configuration that runs on 
the command line — e.g., mach reformat …  clang-format is looking like a good 
candidate for C/C++, it just needs a configuration.  For JavaScript, I’ve used 
the python js-beautify, but it’s pretty light on configuration options, it 
might need some hacking to make it better.

Ideally though, there should be set of configuration files for common editors.  
I’m certain there are plenty out there already.  Let’s bring them all together 
(attaching the files 
https://developer.mozilla.org/en-US/docs/Developer_Guide/Coding_Style might be 
enough).

  3. reformat everything

Take the command line tool and run it over all the code.  I realise that this 
is the contentious part, but you don’t get the real benefits until you do this.

Once you do this, it’s safe to reformat files at any time without messing with 
parts of files that you haven’t touched.  This is important because many tools 
only reformat entire files.

As for `hg blame` and related tools, I have a workaround for that.  I’ve 
knocked together a tool that takes a file and a reformatter command line, and 
churns out a series of patches that retain blame: 
https://github.com/martinthomson/blame-bridge

The patch bitrot problem is not easy to work around, but that will depend on 
how closely the affected files already conformed to the style guide.

4. enforce compliance

This is probably a step for the future, but if there was - for example - a 
commit bot that waited for a clean build+test run, adding a format check to 
that run would allow the bot to block patches that screwed up the formatting of 
files.

—Martin


On 2014-01-05, at 18:34, Nicholas Nethercote n.netherc...@gmail.com wrote:


We've had some recent discussions about code style. I have a proposal

For the purpose of this proposal I will assume that there is consensus on the
following ideas.

- Having multiple code styles is bad.

- Therefore, reducing the number of code styles in our code is a win (though
  there are some caveats relating to how we get to that state, which I discuss
  below).

- The standard Mozilla style is good enough. (It's not perfect, and it should
  continue to evolve, but if you have any pet peeves please mention them in a
  different thread to this one.)

With these ideas in mind, a goal is clear: convert non-Mozilla-style code to
Mozilla-style code, within reason.

There are two notions that block this goal.

- Our rule of thumb is to follow existing style in a file. From the style
  guide:

  The following norms should be followed for new code, and for Tower of Babel
  code that needs cleanup. For existing code, use the prevailing style in a
  file or module, or ask the owner if you are on someone else's turf and it's
  not clear what style to use.

  This implies that large-scale changes to convert existing code to standard
  style are discouraged. (I'd be interested to hear if people think this
  implication is incorrect, though in my experience it is not.)

  I propose that we officially remove this implicit discouragement, and even
  encourage changes that convert non-Mozilla-style code to Mozilla-style (with
  some exceptions; see below). When modifying badly-styled code, following
  existing style is still probably best.

  However, large-scale style fixes have the following downsides.

  - They complicate |hg blame|, but plenty of existing refactorings (e.g.
removing old types) have done likewise, and these are bearable if they
aren't too common. Therefore, style conversions should do entire files in
a single patch, where possible, and such patches should not make any
non-style changes. 

Re: Add-on File Registration PRD

2013-11-04 Thread Axel Hecht

On 11/4/13 9:41 AM, Onno Ekker wrote:

Jorge Villalobos wrote:

Cross posting to dev.planning, where I originally intended this to be.
Please follow up to dev.planning.

Jorge

On 10/30/13 3:42 PM, Jorge Villalobos wrote:

Hello!

As many of you know, the Add-ons Team, User Advocacy Team, Firefox Team
and others have been collaborating for over a year in a project called
Squeaky [1]. Our aim is to improve user experience for add-ons,
particularly add-ons that we consider bad for various levels of bad.

Part of our work consists on pushing forward improvements in Firefox
that we think will significantly achieve our goals, which is why I'm
submitting this spec for discussion:

https://docs.google.com/document/d/1SZx7NlaMeFxA55-u8blvgCsPIl041xaJO5YLdu6HyOk/edit?usp=sharing

The Add-on File Registration System is intended to create an add-on file
repository that all add-on developers need to submit their files to.
This repository won't publish any of the files, and inclusion won't
require more than passing a series of automatic malware checks. We will
store the files and generated hashes for them.

On the client side, Firefox will compute the hashes of add-on files
being installed and query the API for it. If the file is registered, it
can be installed, otherwise it can't (there is planned transition period
to ease adoption). There will also be periodic checks of installed
add-ons to make sure they are registered. All AMO files would be
registered automatically.

This system will allow us to better keep track of add-on IDs, be able to
easily find the files they correspond to, and have effective
communication channels to their developers. It's not a silver bullet to
solve add-on malware problems, but it raises the bar for malware developers.

We believe this strikes the right balance between a completely closed
system (where only AMO add-ons are allowed) and the completely open but
risky system we currently have in place. Developers are still free to
distribute add-ons as they please, while we get a much-needed set of
tools to fight malware and keep it at bay.

There are more details in the doc, so please give it a read and post
your comments and questions on this thread.

Jorge Villalobos
Add-ons Developer Relations Lead

[1] https://wiki.mozilla.org/AMO/Squeaky





Hi,

I have another use case which isn't clearly described by the current doc.

I have an English version of Firefox/Thunderbird installed with
additional language packs from
http://ftp.mozilla.org/pub/mozilla.org/%APP%/%CHANNEL%/%VERSION%/%OS%/xpi/.

After each update I have to manually add the language packs again.
Those files are created by Mozilla but aren't published to amo.

It would be a real shame if it wouldn't be possible anymore to add
different languages to your installation.

Onno



Most language packs are featured on AMO these days, please check 
https://addons.mozilla.org/En-us/firefox/language-tools/. They're pulled 
from ftp by an admin tool in AMO before the release.


But yes, I'll actually need to read the original post with l10n in mind.

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Add-on File Registration PRD

2013-11-04 Thread Axel Hecht

Hi,

I've read the thread now. I ignored it based on the subject, btw; it 
didn't seem to affect anything in real life from just glancing at it.


I'd like to get langpacks excluded. Maybe we need to check more 
robustly what they're able to do, but for a localizer wanting 
to test their work, this is really cumbersome. Note, the default config 
of those might even fail the malware checks, as the default identifies 
the author as mozilla.org.


For non-l10n questions:

We'd need developers to grant us a license to their code, right? Do we 
know for how many add-ons we'd actually need a license agreement that's 
not covered by the EULA of the add-on?


I think that the current proposals for developers and internal org 
add-ons are too coarse. The proposal seems to expose them to malware 
just for the sake of their own development or deployment.


I think that blocking the install is too hard for non-registered 
add-ons. A consistent UI that encourages users to uninstall 
non-registered add-ons might be all we need to get developers to 
register voluntarily.


Also, the "just break the network path" route seems easy to reach for 
malware installed by .exe installers, on Windows at least. Or it would at 
least be as open to social engineering as dismissing a non-registered 
add-on UI.


Axel


On 10/30/13 10:55 PM, Jorge Villalobos wrote:

Cross posting to dev.planning, where I originally intended this to be.
Please follow up to dev.planning.

Jorge

On 10/30/13 3:42 PM, Jorge Villalobos wrote:

Hello!

As many of you know, the Add-ons Team, User Advocacy Team, Firefox Team
and others have been collaborating for over a year in a project called
Squeaky [1]. Our aim is to improve user experience for add-ons,
particularly add-ons that we consider bad for various levels of bad.

Part of our work consists of pushing forward improvements in Firefox
that we think will significantly achieve our goals, which is why I'm
submitting this spec for discussion:

https://docs.google.com/document/d/1SZx7NlaMeFxA55-u8blvgCsPIl041xaJO5YLdu6HyOk/edit?usp=sharing

The Add-on File Registration System is intended to create an add-on file
repository that all add-on developers need to submit their files to.
This repository won't publish any of the files, and inclusion won't
require more than passing a series of automatic malware checks. We will
store the files and generated hashes for them.

On the client side, Firefox will compute the hashes of add-on files
being installed and query the API for it. If the file is registered, it
can be installed, otherwise it can't (there is a planned transition period
to ease adoption). There will also be periodic checks of installed
add-ons to make sure they are registered. All AMO files would be
registered automatically.

This system will allow us to better keep track of add-on IDs, be able to
easily find the files they correspond to, and have effective
communication channels to their developers. It's not a silver bullet to
solve add-on malware problems, but it raises the bar for malware developers.

We believe this strikes the right balance between a completely closed
system (where only AMO add-ons are allowed) and the completely open but
risky system we currently have in place. Developers are still free to
distribute add-ons as they please, while we get a much-needed set of
tools to fight malware and keep it at bay.

There are more details in the doc, so please give it a read and post
your comments and questions on this thread.

Jorge Villalobos
Add-ons Developer Relations Lead

[1] https://wiki.mozilla.org/AMO/Squeaky





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-17 Thread Axel Hecht

On 10/17/13 12:02 PM, Gervase Markham wrote:

On 16/10/13 16:02, Axel Hecht wrote:

We'll need to go down a path that works for Firefox OS.


With Firefox OS, we don't have the download-size issue, do we? So we can
ship all the data.

Gerv



We have issues with disk space, currently. We're already in the 
situation where all our keyboard data doesn't fit on quite a few of the 
devices out there.


Also, FOTA size matters a bit, though that's probably less of a problem.

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-17 Thread Axel Hecht

On 10/16/13 5:39 PM, Jeff Walden wrote:

On 10/16/2013 02:10 PM, Axel Hecht wrote:

I wonder how far we can get by doing something along the lines we use for 
webfonts, starting to do the best we can with the data we already have, and 
improve once the perfect data is local.

Having the Intl.Foo algorithms returning different data over time is, IMO, even 
worse than deciding that certain locales are less important than others.  Aside 
from Math.random, of course, I can't think of anything in JS that has different 
behavior on the same inputs over time.

Jeff
For one, I don't think that's true for the web. You might think so in 
terms of what's in the js specs, but the distinction between that and 
html5, and all kinds of server errors and timing differences, is just theory.


More importantly, the impact of supporting a finite set of languages can 
easily be the nail in the coffin for the others. I don't think that's 
what mozilla stands for.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-17 Thread Axel Hecht

On 10/17/13 2:41 PM, Dao wrote:

On 16.10.2013 17:02, Axel Hecht wrote:

We'll need to go down a path that works for Firefox OS.


[...]


But, yes, I think we'll need a hosted service to provide that data on
demand in the end.


This sounds like a non-starter for mobile devices, doesn't it?


Well, it makes the implementation trickier.

Of course, Telefonica just updated the phones from 1.0.1 to 1.1 in 
Spain, over the air without charges, so the infrastructure is there.


It's an organizational effort to tie into that infrastructure. We'll 
need a reference implementation like we have with software update, and 
then get our partner contacts in shape to explain how to do that on 
their side. Plus customizable hooks, of course.


And then, yes, we'd need to still disable the downloads, or make them 
really optional, if you're on roaming data or something. But software 
update can do that already, too, I suspect.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-17 Thread Axel Hecht

On 10/17/13 3:41 PM, Brian Smith wrote:

On Thu, Oct 17, 2013 at 3:46 AM, Axel Hecht l...@mozilla.com wrote:

We have issues with disk space, currently. We're already in the situation
where all our keyboard data doesn't fit on quite a few of the devices out
there.


Where can one read more about this? This ICU data is not *that* huge.
If we can't afford a couple of megabytes now on B2G then it seems like
we're in for severe problems soon. Isn't Gecko alone growing by
megabytes per year?


I wish there were docs and clear-cut answers. We've been in dire straits 
already, when our QA smoketest phones wouldn't get updates for days due 
to system.img being too large. And thus we couldn't get QA to run tests.


These are the questions I asked last time, and don't have answers to:

- What exactly are the limiting sizes?
-- image size (per bootloader?)
-- disk partition size
--- at which point in time? user dependent?
--- can we have telemetry for this, if so?

I suspect we're talking about the joint size for gaia and gecko, but I'm 
not sure that's the case, or at least always the case. I.e., do we get a 
cookie if we move data from gaia into gecko?


There's probably more that I don't know, just because I don't know much 
about phones, and the various processes to get software on to them.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-16 Thread Axel Hecht

Jumping in late, so top posting.

I think being able to load language data dynamically is a good idea. I 
don't see a reason why this should be tied to a language pack, 
though. The other way around is a different question, i.e.:


language data doesn't include UI localization
UI localization should include language data

We have several multi-language products by now, those should work, in 
particular Firefox OS. We're doing quite a few things there that already 
duplicate language data. Much of that is in /shared, which isn't shared, 
but copied to many apps. Having that data inside gecko would actually 
get it to be shared.


I think much of the ICU data (which is technically CLDR data packed in 
ICU, mostly) flows along similar lines as our hyphenation dictionaries. 
The web should just work, independent of which UI locale you're using.


I wonder how far we can get by doing something along the lines we use 
for webfonts, starting to do the best we can with the data we already 
have, and improve once the perfect data is local. I'm personally OK if 
this is a notification bar to reload, even.
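
A minimal sketch of that webfont-style approach (formatWithAvailableData(), 
hasFullData() and fetchLocaleData() are hypothetical helpers, not existing 
APIs):

function displayDate(date, locale, element) {
  // Best effort now, with whatever locale data is already local.
  element.textContent = formatWithAvailableData(date, locale);
  if (!hasFullData(locale)) {
    // Pull the full data from a hosted service, then improve in place
    // (or show a notification bar offering a reload instead).
    fetchLocaleData(locale).then(function () {
      element.textContent = formatWithAvailableData(date, locale);
    });
  }
}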


Axel

PS: ICU is driven by the js globalization api. That API was driven by MS and 
Google to get the data into their html app platforms. For mozilla, IMHO, 
the driver for the g18n api should be Firefox OS; we're struggling to work 
around the lack of data for sorting, timezones, and language data all around.


On 10/15/13 6:06 PM, Benjamin Smedberg wrote:

With the landing of bug 853301, we are now shipping ICU in desktop
Firefox builds. This costs us about 10% in both download and on-disk
footprint: see https://bugzilla.mozilla.org/show_bug.cgi?id=853301#c2.
After a discussion with Waldo, I'm going to post some details here about
how much this costs in terms of disk footprint, to discuss whether there
are things we can remove from this footprint, and whether the footprint
is actually worth the cost. This is particularly important because our
user research team has identified Firefox download weight as an
important factor affecting Firefox adoption and update rates in some
markets.

On-disk, ICU data breaks into the following categories:

* collation tables - 3.3MB

These are rules for sorting strings in multiple languages and
situations. See http://userguide.icu-project.org/collation for basic
background. These tables are necessary for implementing Intl.Collator.

The Intl.Collator API has methods to expose a subset of languages. It is
not clear from my reading of the specification whether it is expected
that browsers will normally ship with the full set of languages or only
the subset of the browser locale.

* currency tables - 1.9 MB

These are primarily the localized name of each currency in each
language. This is used by the Intl.NumberFormat API to format
international currencies.

* timezone tables - 1.7MB

Primarily the name of every time zone in each language. This data is
necessary for implementing Intl.DateTimeFormat.

* language data - 2.1 MB

This is a bunch of other data associated with displaying information for
a particular language: number formatting in various long and short
formats, calendar formats and names for the various world calendar systems.
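
As a quick illustration of what each category backs through the Intl API 
(all of the calls below are in the spec):

var collator = new Intl.Collator("de");          // collation tables
collator.compare("ä", "z");                      // negative: "ä" sorts first in German

var price = new Intl.NumberFormat("ru", {        // currency tables
  style: "currency", currency: "EUR"
});
price.format(1234.5);                            // Russian grouping, localized currency display

var time = new Intl.DateTimeFormat("ja", {       // timezone tables + language data
  timeZone: "Europe/Berlin", timeZoneName: "long",
  hour: "numeric", minute: "numeric"
});
time.format(new Date());                         // Japanese names for a German timezone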

==

Do we need this data for any language other than the language Firefox
ships in? Can we just include the relevant language data in each
localized build of Firefox, and allow users to get other language data
via downloadable language packs, similarly to how dictionaries are handled?

Is it possible that some of this data (the collation tables?) should be
in all Firefox locales, but other data (currency and timezone names) is
not as important and we can ship it only in one language?

As far as I can tell, the spec allows user agents to ship whatever
languages they need; the real question is what users and site authors
actually need and expect out of the API. (I'm reading the spec out of
http://wiki.ecmascript.org/doku.php?id=globalization:specification_drafts)

I am still working to get better number to quantify the costs in terms
of lost adoption for additional download weight.

Also, we are currently duplicating the data tables on mac universal
builds, because they are compiled-in symbols. We should clearly use a
separate file for these tables to avoid unnecessary download/install
weight. This is now filed as bug 926980.

--BDS




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-16 Thread Axel Hecht

On 10/16/13 3:50 PM, Gervase Markham wrote:

On 16/10/13 14:47, Anne van Kesteren wrote:

The API is synchronous so that seems like a bad idea.


As in, it'll cause the tab to freeze (one time only, when a new language
is called for) while the file is downloading? OK, that's bad, but so is
having Firefox be a lot bigger...

Perhaps, as Brian suggested, we should be looking at using the Windows
APIs and/or system ICU for some of this data, even if there are some
things for which we want to ship our own implementation.

Gerv



We'll need to go down a path that works for Firefox OS.

I think that being less-than-great the first time you hit something 
off the main track is OK. We should see what actually happens with 
what's in the g18n apis now.

We'll likely also need a way to free up excessive disk usage, and to fend 
off DOS attacks that sneak in little fragments of language content for 200 
languages or some such.


But, yes, I think we'll need a hosted service to provide that data on 
demand in the end.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-11 Thread Axel Hecht

On 10/11/13 2:47 PM, David Rajchenbach-Teller wrote:

I'd be happy if we could progressively kill FileUtils.jsm and make
nsIFile [noscript]. Don't know if this qualifies as platform feature,
though.

Cheers,
  David



Both are heavily used in the js build system for gaia, fwiw.
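
For context, this is the sort of chrome-JS usage the gaia build scripts 
lean on (a sketch; the path is made up):

Components.utils.import("resource://gre/modules/FileUtils.jsm");

var dir = new FileUtils.File("/src/gaia/profile");   // nsIFile wrapper
if (!dir.exists()) {
  dir.create(Components.interfaces.nsIFile.DIRECTORY_TYPE,
             FileUtils.PERMS_DIRECTORY);
}
var prefs = dir.clone();       // nsIFile: clone, then build paths segment-wise
prefs.append("user.js");
dump(prefs.path + "\n");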

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-10 Thread Axel Hecht

On 10/10/13 2:36 AM, Zack Weinberg wrote:

On 2013-10-09 12:01 PM, Gervase Markham wrote:

In the spirit of learning from this, what's next on the chopping block?


In between "keep the C++ implementation" and "scrap entirely" is
"reimplement in JS", and I think that should be seriously considered for
things like XSLT where there's no question but what it increases our
attack surface, but there is also a strong (if small) constituency.
Where it is currently impossible to do something in JS, that points at a
weakness in the platform - whether capabilities or just speed.

In that vein, I think we should take a hard look at the image decoders.
Not only is that a significant chunk of attack surface, it is a place
where it's hard to innovate; image format after image format has died on
the vine because it wasn't *enough* of an improvement to justify the
additional glob of compiled code. Web-deliverable JS image decoders
could open that up.

The other thing that comes to mind is, if Web Components lives up to its
promise, perhaps it could be used for the built-in form controls? That's
also a big pile of hair, and form elements in odd places have been an
ongoing source of crasher bugs.

zw


I agree with the sentiment, but not on the example.

Having been a peer of the XSLT module back in the day, we started with 
a rather js-DOM-like implementation and moved over to a pure nsIContent 
etc. impl, and each step there won us an order of magnitude in perf.


I don't think that XSLT is a good candidate for implementing it in JS.

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-10 Thread Axel Hecht

On 10/10/13 2:43 PM, Jeff Walden wrote:

On 10/10/2013 02:27 PM, Axel Hecht wrote:

I agree with the sentiment, but not on the example.

Having been a peer of the XSLT module back in the day, we started with a 
rather js-DOM-like implementation and moved over to a pure nsIContent etc. 
impl, and each step there won us an order of magnitude in perf.

But do we actually care about the perf of sites that use XSLT now, as long as 
perf isn't completely abysmal?  A utility company showing billing statements, I 
think we can slow down without feeling guilty.  But if, say, Google Maps or 
whichever used XSLT (I seem to remember *something* Google used it, forcing 
Presto to implement XSLT, back in the day -- maybe they've switched now, blink 
thread might say if I checked it), we might care.

Jeff
My point is, the perf was completely abysmal, and the key is to use 
nsINodeInfo for the xpath patterns instead of DOM localName and 
namespaceURI string comparisons. There's also a benefit from using the 
low-level atom-nsID-based content creation APIs.
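
To illustrate the general point outside of Gecko internals (a toy sketch, 
not actual Gecko code): matching a node by one interned handle beats two 
string comparisons per candidate node.

var XSLT_NS = "http://www.w3.org/1999/XSL/Transform";

// Naive match: two string comparisons per node.
function matchesByStrings(node) {
  return node.namespaceURI === XSLT_NS && node.localName === "template";
}

// nsINodeInfo-style match: intern each (namespace, name) pair once,
// then compare handles by identity.
var interned = {};
function intern(ns, name) {
  var key = ns + "#" + name;
  if (!interned[key]) interned[key] = { ns: ns, name: name };
  return interned[key];
}
var XSLT_TEMPLATE = intern(XSLT_NS, "template");
function matchesByHandle(nodeInfo) {
  return nodeInfo === XSLT_TEMPLATE;  // one pointer comparison
}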


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-09 Thread Axel Hecht

On 10/9/13 6:18 PM, Boris Zbarsky wrote:

On 10/9/13 12:01 PM, Gervase Markham wrote:

In the spirit of learning from this, what's next on the chopping block?


RDF


Yes.

I think that localstore.rdf is the long pole. Not so much because we 
abuse it for xul persistence; that's OK to fix. The thing that bothers 
me most is all of those add-ons that probably still use it.


I'd love if we could get some data about that in particular, and RDF 
usage in addons in general.


And then there's mailnews, of course. That one's sad. Close, but we 
moved everyone off of mailnews just before it got rid of RDF, IIRC.


Axel

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: No more Makefile.in boilerplate

2013-09-05 Thread Axel Hecht

Hi,

out of curiosity: I recall that relativesrcdir was actually the trigger 
to switch on and off some l10n functionality in jar packaging.


Is that now on everywhere?

Axel

On 9/5/13 2:34 AM, Mike Hommey wrote:

Hi,

Assuming it sticks, bug 912293 made it unnecessary to start Makefile.in
files with the usual boilerplate:

   DEPTH = @DEPTH@
   topsrcdir = @top_srcdir@
   srcdir = @srcdir@
   VPATH = @srcdir@
   relativesrcdir = @relativesrcdir@

   include $(DEPTH)/config/autoconf.mk

All of the above can now be skipped. Directories that do require a
different value for e.g. VPATH or relativesrcdir can still place a value
that will be taken instead of the default. It is not recommended to do
that in new Makefile.in files, or to change existing files to do that,
but the existing files that did require such different values still do
use those different values.

Also, if the last line of a Makefile.in is:

   include $(topsrcdir)/config/rules.mk

That can be skipped as well.

Mike



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting the current release version

2013-08-30 Thread Axel Hecht

On 8/30/13 4:14 PM, Ed Morley wrote:

On 30 August 2013 15:09:08, Eric Shepherd wrote:

This could even be a place in the source code we could pull up an MXR
link for and peel out of the code. I just don't know where in the code to
get it.


For platform:
https://hg.mozilla.org/releases/mozilla-release/file/tip/config/milestone.txt


For Firefox (and yeah currently the same as platform):
https://hg.mozilla.org/releases/mozilla-release/file/tip/browser/config/version.txt


Best wishes,

Ed


This is going to be off by one release for one week out of 6, though.

We're doing the beta-release migration at release minus a week and a 
day, i.e., we're migrating on Sept 9, and release on Sept 17:


https://mail.mozilla.com/home/ake...@mozilla.com/Release%20Management.html?view=month&date=20130930
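
As an aside, pulling that version string is a one-line fetch; a sketch 
(keeping the off-by-one week in mind):

var VERSION_URL = "https://hg.mozilla.org/releases/mozilla-release/" +
                  "raw-file/tip/browser/config/version.txt";

fetch(VERSION_URL).then(function (response) {
  return response.text();
}).then(function (text) {
  // During the week between migration and release, this is one version
  // ahead of what users actually run.
  console.log("mozilla-release version: " + text.trim());
});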

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting the current UI locale from C++

2013-08-29 Thread Axel Hecht

On 8/29/13 12:03 PM, Henri Sivonen wrote:

On Thu, Aug 29, 2013 at 10:12 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

How do I get the language code for the currently active UI language
pack from within Gecko C++ code in a way that works across desktop,
Android, B2G and Metro?


On IRC, I was pointed to
https://mxr.mozilla.org/comm-central/source/mozilla/editor/composer/src/nsEditorSpellCheck.cpp#762

Does that mechanism work on B2G and Android?



I'll read up on the other thread, and I still think the approach is 
wrong here, sorry.


But yes, getting the selected locale for the global package is what I 
try to keep working at all costs, notably on Android, where we're already 
doing stunts to do that.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting the current UI locale from C++

2013-08-29 Thread Axel Hecht

On 8/29/13 3:17 PM, Anne van Kesteren wrote:

On Thu, Aug 29, 2013 at 1:48 PM, Axel Hecht l...@mozilla.com wrote:

I'll read up on the other thread, and I still think the approach is wrong
here, sorry.


You'll have to explain that more fully I think.

This is the current approach. However the current approach leads to
all kinds of bugs because localization teams don't have expertise in
this area. So we improve the status quo by reducing this source of
bugs (that are still there and deployed to users).

This is also the approach other browsers take.

There might be a better approach (we'll have to research that, can't
do it based on gut), but until that is there improving the status quo
is a very good thing for our users.




I followed up in https://bugzilla.mozilla.org/show_bug.cgi?id=910192

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Feedback wanted on font checker

2013-08-13 Thread Axel Hecht

On 8/13/13 8:05 AM, Karl Tomlinson wrote:

On Fri, 09 Aug 2013 13:51:45 +0200, Axel Hecht wrote:


To clarify, the tool does support either .name. or .name-list. at
this point. Is there a code path or a setup where we have for any
language/family both a .name. and a .name-list. entry?

I.e.

pref("font.name.serif.zh-TW", "Times");
pref("font.name-list.serif.zh-TW", "Droid Serif, Droid Sans Fallback");

Just a randomly constructed example.


The .name. and .name-list. are essentially concatenated into a
single list, even though that is less clear in the second piece of
code here:

http://hg.mozilla.org/mozilla-central/annotate/c146d402a55f/gfx/thebes/gfxPlatform.cpp#l1019
http://hg.mozilla.org/mozilla-central/annotate/c146d402a55f/gfx/thebes/gfxFont.cpp#l4156



OK, that was easy to add, 
https://github.com/Pike/font-tool/commit/707bff7b6e87f695038bac1f80ba66e2b216593d.
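
The concatenation boils down to something like this (a sketch; getPref() 
is a hypothetical stand-in for the prefs service):

function fontListFor(generic, langGroup) {
  var names = [];
  var single = getPref("font.name." + generic + "." + langGroup);
  if (single) names.push(single.trim());           // e.g. "Times"
  var list = getPref("font.name-list." + generic + "." + langGroup);
  if (list) {                                      // e.g. "Droid Serif, Droid Sans Fallback"
    list.split(",").forEach(function (name) { names.push(name.trim()); });
  }
  return names;  // .name. first, then the .name-list. entries
}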


Thanks

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Feedback wanted on font checker

2013-08-13 Thread Axel Hecht

On 8/9/13 5:28 PM, Jonathan Kew wrote:

On 9/8/13 15:36, Axel Hecht wrote: So I created three test cases based
on the data I see, Greek and
  Bulgarian monospace and Hindi sans-serif. They're linked off of
  http://pike.github.io/fonts/. It's prerendered images on the left
  column, and regular text on the right.
 
  Hindi is blank squares as I expect, but for Bulgarian and Greek, I see
  Bulgarian and Greek letters on my 1.1 unagi.
 
  It'd be great to get some help as to what I actually see there. Oh
fonts.

Font fallback means that if there's ANY font on your Unagi that includes
Cyrillic (or Greek) characters, you'll see them rather than blank boxes,
even if that font isn't what's listed in the font.name prefs (And it may
not even be monospaced... it's probably falling back to Open Sans or
Charis SIL or something.)

JK



I've put a UnicodeHex font after the monospace or whatever now, so 
layout won't fall back beyond the monospace or sans-serif font families.


Not sure if that's closer to what we actually want to test :-)

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Feedback wanted on font checker

2013-08-09 Thread Axel Hecht

On 8/9/13 1:27 PM, Karl Tomlinson wrote:

Axel Hecht writes:


On 8/8/13 10:45 PM, Karl Tomlinson wrote:

Axel Hecht writes:


On 8/8/13 5:17 PM, Jonathan Kew wrote:

On 8/8/13 15:17, Axel Hecht wrote:

Counter-example seems to be Chinese: the unagi shows something, while my
tool reports 13k missing glyphs for zh-TW.



If we're using Droid Sans Fallback, I believe it supports somewhere well
over 20,000 Chinese characters, so that probably includes all those in
fontconfig's list - but that still doesn't mean it has *every* possible
Chinese character.


Yeah, I see DroidSansFallback in fonts.mk in moztt,
https://github.com/mozilla-b2g/moztt/blob/master/fonts.mk#L31, but
how would we pick that up?


I suspect you want to add font.name-list support to your tool.
These fonts should be searched by the product after font.name.

pref("font.name-list.serif.zh-TW", "Droid Serif, Droid Sans Fallback");
pref("font.name-list.sans-serif.zh-TW", "Roboto, Droid Sans, Droid Sans Fallback");
pref("font.name-list.monospace.zh-TW", "Droid Sans Fallback");



Right. I pick up both, but not name-list as fallback to
name. Basically, what comes last in all.js wins right now.

Is there a reason why font.name.* isn't just the first in
font.name-list.* ? I.e., why do we have two of them, and does it
matter for what I'm trying to do?


This was set up before I was around, so I'm not clear on the
reasoning.  Some comments [1] point out that only .name. is
configurable in Firefox Preferences UI.  Perhaps that would even
support the idea of .name. remaining in the .name-list. in case
some other font accidentally selected by the user was not helpful.

I think you do want to check both preferences for the situation
where a Latin font may have been inserted in .name. before a
native font in .name-list. in an attempt to get better Latin
characters.

I suspect usually one font will cover the language, so your tool
may not need to find the union of the ranges or the intersection of what's missing.

At one stage IIRC only these fonts were being checked on Android,
while desktop platforms had a fallback search through every font
on the system.  That has changed a bit with desktop platforms not
always checking exhaustively, for perf reasons, and it may have
changed on Android also.

Even if other system fonts are checked in fallback, it is
important that the .name. and .name-list. fonts cover their
language, or you can get an awkward mix of characters from
different fonts that don't work together.  That can be as simple
as a collage of different characters, or as bad as not working at
all because some scripts require the presentation of characters to
depend on neighbouring characters and this is not supported across
a mix of fonts.

[1] 
http://hg.mozilla.org/mozilla-central/annotate/e33c2011643e/layout/base/nsPresContext.cpp#l485



Yeah, we've seen those on our website a few times. Ugly, and a reason 
why I started writing this tool.


To clarify, the tool does support either .name. or .name-list. at this 
point. Is there a code path or a setup where we have for any 
language/family both a .name. and a .name-list. entry?


I.e.

pref("font.name.serif.zh-TW", "Times");
pref("font.name-list.serif.zh-TW", "Droid Serif, Droid Sans Fallback");

Just a randomly constructed example.

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Feedback wanted on font checker

2013-08-09 Thread Axel Hecht

On 8/9/13 1:51 PM, Axel Hecht wrote:

On 8/9/13 1:27 PM, Karl Tomlinson wrote:

Axel Hecht writes:


On 8/8/13 10:45 PM, Karl Tomlinson wrote:

Axel Hecht writes:


On 8/8/13 5:17 PM, Jonathan Kew wrote:

On 8/8/13 15:17, Axel Hecht wrote:

Counter-example seems to be Chinese: the unagi shows something,
while my
tool reports 13k missing glyphs for zh-TW.



If we're using Droid Sans Fallback, I believe it supports
somewhere well
over 20,000 Chinese characters, so that probably includes all
those in
fontconfig's list - but that still doesn't mean it has *every*
possible
Chinese character.


Yeah, I see DroidSansFallback in fonts.mk in moztt,
https://github.com/mozilla-b2g/moztt/blob/master/fonts.mk#L31, but
how would we pick that up?


I suspect you want to add font.name-list support to your tool.
These fonts should be searched by the product after font.name.

pref("font.name-list.serif.zh-TW", "Droid Serif, Droid Sans Fallback");
pref("font.name-list.sans-serif.zh-TW", "Roboto, Droid Sans, Droid Sans Fallback");
pref("font.name-list.monospace.zh-TW", "Droid Sans Fallback");



Right. I pick up both, but not name-list as fallback to
name. Basically, what comes last in all.js wins right now.

Is there a reason why font.name.* isn't just the first in
font.name-list.* ? I.e., why do we have two of them, and does it
matter for what I'm trying to do?


This was set up before I was around, so I'm not clear on the
reasoning.  Some comments [1] point out that only .name. is
configurable in Firefox Preferences UI.  Perhaps that would even
support the idea of .name. remaining in the .name-list. in case
some other font accidentally selected by the user was not helpful.

I think you do want to check both preferences for the situation
where a Latin font may have been inserted in .name. before a
native font in .name-list. in an attempt to get better Latin
characters.

I suspect usually one font will cover the language, so your tool
may not need to find the union of the ranges or the intersection of what's missing.

At one stage IIRC only these fonts were being checked on Android,
while desktop platforms had a fallback search through every font
on the system.  That has changed a bit with desktop platforms not
always checking exhaustively, for perf reasons, and it may have
changed on Android also.

Even if other system fonts are checked in fallback, it is
important that the .name. and .name-list. fonts cover their
language, or you can get an awkward mix of characters from
different fonts that don't work together.  That can be as simple
as a collage of different characters, or as bad as not working at
all because some scripts require the presentation of characters to
depend on neighbouring characters and this is not supported across
a mix of fonts.

[1]
http://hg.mozilla.org/mozilla-central/annotate/e33c2011643e/layout/base/nsPresContext.cpp#l485




Yeah, we've seen those on our website a few times. Ugly, and a reason
why I started writing this tool.

To clarify, the tool does support either .name. or .name-list. at this
point. Is there a code path or a setup where we have for any
language/family both a .name. and a .name-list. entry?

I.e.

pref("font.name.serif.zh-TW", "Times");
pref("font.name-list.serif.zh-TW", "Droid Serif, Droid Sans Fallback");

Just a randomly constructed example.

Axel



So I created three test cases based on the data I see, Greek and 
Bulgarian monospace and Hindi sans-serif. They're linked off of 
http://pike.github.io/fonts/. It's prerendered images on the left 
column, and regular text on the right.


Hindi is blank squares as I expect, but for Bulgarian and Greek, I see 
Bulgarian and Greek letters on my 1.1 unagi.


It'd be great to get some help as to what I actually see there. Oh fonts.

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Feedback wanted on font checker

2013-08-08 Thread Axel Hecht

Hi,

I'm looking for a review of some rather hacky tool I just created to see 
if the fonts on b2g actually support a particular language.


https://github.com/Pike/font-tool

Basic outline of what the tool does:

Parses langGroups.properties to see which locale has which group, with 
default to x-unicode.


Preprocesses all.js with -DANDROID -DMOZ_WIDGET_GONK, and parses
pref(font.name)

Uses fc-scan to find all the fonts in moztt and 
platform/frameworks/base, pick those with style Regular.


Uses fc-validate to figure out if those fonts actually support the given 
locale.
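
Condensed into a Node-style sketch (the real tool is Python; the regex and 
paths here are simplified assumptions):

var execSync = require("child_process").execSync;

// Pull pref("font.name.<generic>.<lang>", "...") out of preprocessed all.js.
function fontsForPref(allJsSource, generic, lang) {
  var re = new RegExp('pref\\("font\\.name\\.' + generic + '\\.' + lang +
                      '",\\s*"([^"]+)"\\)');
  var m = allJsSource.match(re);
  return m ? m[1].split(",").map(function (s) { return s.trim(); }) : [];
}

// fc-validate -l <lang> reports glyphs missing from the language's
// fontconfig orthography.
function coverageReport(fontFile, lang) {
  return execSync("fc-validate -l " + lang + " " + fontFile).toString();
}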


Example, Bulgarian seems to be missing 60 glyphs for monospace:

python buildfonts.py /src/central/mozilla-central/ bg
monospace
moztt/SourceCodePro-1.017/SourceCodePro-Regular.ttf:0 Missing 60 
glyph(s) to satisfy the coverage for bg language

sans-serif
base/data/fonts/Roboto-Regular.ttf:0 Satisfy the coverage for bg language
serif
moztt/CharisSILCompact-4.114/CharisSILCompact-R.ttf:0 Satisfy the 
coverage for bg language


Counter-example seems to be Chinese: the unagi shows something, while my 
tool reports 13k missing glyphs for zh-TW.


All of this has been mostly trial and error and stabbing in the dark, 
it'd be great if I could get some feedback and comments.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Feedback wanted on font checker

2013-08-08 Thread Axel Hecht

Hi Jonathan,

thanks for the feedback, more inline.

On 8/8/13 5:17 PM, Jonathan Kew wrote:

On 8/8/13 15:17, Axel Hecht wrote:

Hi,

I'm looking for a review of some rather hacky tool I just created to see
if the fonts on b2g actually support a particular language.

https://github.com/Pike/font-tool

Basic outline of what the tool does:

Parses langGroups.properties to see which locale has which group, with
default to x-unicode.

Preprocesses all.js with -DANDROID -DMOZ_WIDGET_GONK, and parses
pref(font.name)

Uses fc-scan to find all the fonts in moztt and
platform/frameworks/base, pick those with style Regular.

Uses fc-validate to figure out if those fonts actually support the given
locale.

Example, Bulgarian seems to be missing 60 glyphs for monospace:

python buildfonts.py /src/central/mozilla-central/ bg
monospace
moztt/SourceCodePro-1.017/SourceCodePro-Regular.ttf:0 Missing 60
glyph(s) to satisfy the coverage for bg language


That sounds plausible, as SourceCodePro does not yet have Cyrillic
support. (Do we have Droid Sans Mono on the device? If so, we should
probably be falling back to that.)


Nice, filed https://bugzilla.mozilla.org/show_bug.cgi?id=903038.


sans-serif
base/data/fonts/Roboto-Regular.ttf:0 Satisfy the coverage for bg language
serif
moztt/CharisSILCompact-4.114/CharisSILCompact-R.ttf:0 Satisfy the
coverage for bg language

Counter-example seems to be Chinese: the unagi shows something, while my
tool reports 13k missing glyphs for zh-TW.


Not surprising, really. Two issues here: first, we rely on font fallback
to find a font that supports a given character, if the default specified
by prefs doesn't have it. In the case of Chinese, I think we tend to
list a Latin font so that it will be used (rather than the often-ugly
Latin glyphs in a Chinese font) for Latin characters, which are commonly
found mixed in to Chinese pages, and then rely on fallback to find the
actual Chinese font when needed.

So you'd need to check the fallback font (probably Droid Sans Fallback,
unless there's something else we're shipping on b2g), not necessarily
the font listed in prefs.

And second, validating the character coverage of a Chinese font is a
bit tricky - you'll need to specify more carefully what the exact
requirements are. For Chinese, there are tens of thousands of glyphs
that are part of the writing system, but most everyday text will only
use a relatively small subset - still several thousand, but nowhere near
everything.

The file fontconfig/tree/fc-lang/zh_tw.orth notes that it is Made by
trimming the Big5 - unicode mapping down to just Chinese glyphs, which
results in a list of around 13,000 characters. In contrast, according to
[1] (though estimates will vary, no doubt), An educated Chinese person
will know about 8,000 characters, but you will only need about 2-3,000
to be able to read a newspaper. So for most practical purposes, that
collection should be (more than) sufficient.

If we're using Droid Sans Fallback, I believe it supports somewhere well
over 20,000 Chinese characters, so that probably includes all those in
fontconfig's list - but that still doesn't mean it has *every* possible
Chinese character.


Yeah, I see DroidSansFallback in fonts.mk in moztt, 
https://github.com/mozilla-b2g/moztt/blob/master/fonts.mk#L31, but how 
would we pick that up?


Axel



JK


[1]
http://www.bbc.co.uk/languages/chinese/real_chinese/mini_guides/characters/characters_howmany.shtml




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Feedback wanted on font checker

2013-08-08 Thread Axel Hecht

On 8/8/13 10:45 PM, Karl Tomlinson wrote:

Axel Hecht writes:


On 8/8/13 5:17 PM, Jonathan Kew wrote:

On 8/8/13 15:17, Axel Hecht wrote:

Counter-example seems to be Chinese: the unagi shows something, while my
tool reports 13k missing glyphs for zh-TW.



If we're using Droid Sans Fallback, I believe it supports somewhere well
over 20,000 Chinese characters, so that probably includes all those in
fontconfig's list - but that still doesn't mean it has *every* possible
Chinese character.


Yeah, I see DroidSansFallback in fonts.mk in moztt,
https://github.com/mozilla-b2g/moztt/blob/master/fonts.mk#L31, but
how would we pick that up?


I suspect you want to add font.name-list support to your tool.
These fonts should be searched by the product after font.name.

pref("font.name-list.serif.zh-TW", "Droid Serif, Droid Sans Fallback");
pref("font.name-list.sans-serif.zh-TW", "Roboto, Droid Sans, Droid Sans Fallback");
pref("font.name-list.monospace.zh-TW", "Droid Sans Fallback");



Right. I pick up both, but not name-list as fallback to name. Basically, 
what comes last in all.js wins right now.


Is there a reason why font.name.* isn't just the first in 
font.name-list.* ? I.e., why do we have two of them, and does it matter 
for what I'm trying to do?


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Three RDFa-related W3C Proposed (Edited) Recommendations

2013-07-17 Thread Axel Hecht
I've only quickly glanced at those, and I haven't followed those 
discussions at all, I have to admit.


Are there any practical consequences for gecko/firefox? It doesn't look 
like there would be, in particular given that the reference 
implementations are all on top of html platforms.


Axel

On 7/17/13 1:12 AM, L. David Baron wrote:

The W3C has released three RDFA-related documents, one proposed
recommendation:

   HTML+RDFa 1.1:
   http://www.w3.org/TR/2013/PR-html-rdfa-20130625/

and two proposed edited recommendations (which contain only
editorial changes):

   RDFa 1.1 Core:
   http://www.w3.org/TR/2013/PER-rdfa-core-20130625/

   XHTML+RDFa 1.1
   http://www.w3.org/TR/2013/PER-xhtml-rdfa-20130625/

There's a call for review to W3C member companies (of which Mozilla
is one) open until Tuesday, July 23 (one week from today).

If there are comments you think Mozilla should send as part of the
review, or if you think Mozilla should voice support or opposition
to the specification, please say so in this thread.  (I'd note,
however, that there have been many previous opportunities to make
comments, so it's somewhat bad form to bring up fundamental issues
for the first time at this stage.)

There was one formal objection earlier in the process, whose history
is documented in
http://lists.w3.org/Archives/Public/public-rdfa-wg/2013Jan/0057.html

-David



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: review stop-energy (was 24hour review)

2013-07-11 Thread Axel Hecht

On 7/11/13 8:24 PM, Jeff Walden wrote:

On 07/11/2013 03:09 AM, Nicholas Nethercote wrote:

On Thu, Jul 11, 2013 at 7:48 AM, Jeff Walden jwalden+...@mit.edu wrote:


Establishing one-day turnaround time on reviews, or on requests, would require 
a lot more polling on my review queue.


You poll your review queue?  Like, by visiting your Bugzilla
dashboard, or something like that?  That's *awful*.

I personally use a push notification system called email with
filters.  Well, strictly speaking it's poll-like because I have to
check my high priority bugs folder, but I do that anyway multiple
times per day so I'm unlikely to take more than an hour or two (while
working) to notice a review request.


I have 
https://bugzilla.mozilla.org/request.cgi?requestee=jwalden%2Bbmo%40mit.edu&do_union=1&group=type&action=queue
 open in a browser tab and check it from time to time.  I don't see how that's any 
different from a mail-filtering folder except in terms of the UI.  They're both polling, as 
you note.  :-)

Jeff



I wish I could watch more than one requestee on one page. I actually 
have a tab with a history of three bugzilla accounts' request pages, and 
go back and forth and reload.
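
A tiny sketch of a workaround: generate one queue URL per account, with 
the same request.cgi parameters as above (the account list is made up):

var accounts = ["reviewer1@example.org", "reviewer2@example.org"];
accounts.forEach(function (who) {
  window.open("https://bugzilla.mozilla.org/request.cgi?requestee=" +
              encodeURIComponent(who) +
              "&do_union=1&group=type&action=queue");
});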


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Embracing git usage for Firefox/Gecko development?

2013-07-10 Thread Axel Hecht

On 5/31/13 10:14 PM, Johnny Stenback wrote:

On 5/31/2013 12:32 AM, Mike Hommey wrote:
[...]

Option 1 is where I personally think it's worth investing effort. It
means we'd need to set up an atomic bidirectional bridge between hg and
git (which I'm told is doable, and there are even commercial solutions
for this out there that may solve this for us). Assuming we solve the
bridge problem one way or another, it would give us all the benefits
listed above, plus developer tool choice, and we could roll this out
incrementally w/o the need to change all of our infrastructure at once.
I.e. our roll out could look something like this:

1. create a read only, official mozilla-central git mirror
2. add support for pushing to try with git and see the results in tbpl
3. update tbpl to show git revisions in addition to hg revisions
4. move to project branches, then inbound, then m-c, release branches, etc


Another way to look at this would be to make the git repository the
real central source, and keep the mercurial branches as clones of it,
with hg-git (and hg-git supports pushing to git, too).

This would likely make it easier to support pushing to both, although
we'd need to ensure nobody pushes octopus merges in the git repo.


Yup, could be, and IMO the main point is that we'd have a lot of
flexibility here.
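
On the octopus-merge caveat: a sketch of what a server-side guard could 
look like, e.g. in a pre-receive hook (Node-style; the hook wiring itself 
is omitted):

var execSync = require("child_process").execSync;

// Commits with three or more parents are octopus merges, which hg-git
// can't represent faithfully on the mercurial side.
function hasOctopusMerge(oldRev, newRev) {
  var out = execSync("git rev-list --min-parents=3 " +
                     oldRev + ".." + newRev).toString();
  return out.trim().length > 0;  // true => refuse the push
}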


Option 2 is where this discussion started (in the Tuesday meeting a few
weeks ago,
https://wiki.mozilla.org/Platform/2013-05-07#Should_we_switch_from_hg_to_git.3F).
Since then I've had a number of conversations and have been convinced
that a wholesale change is the less attractive option. The cost of a
wholesale change will be *huge* on the infrastructure end, to a point
where we need to question whether the benefits are worth the cost. I
have also spoken with other large engineering orgs about git performance
limitations, one of which is doing the opposite switch, going from git
to hg.


I bet this is facebook. Their use case includes millions of changesets
with millions of files (iirc, according to posts I've seen on the git
list).


I've promised not to mention names here, so I won't confirm nor deny...
but the folks I've been talking to mostly have a repo that's a good bit
less than a single order of magnitude larger than m-c, so a couple of
hundred k files, not millions. And given the file count trend in m-c
(see attached image for an approximation), that doesn't make me feel too
good about a wholesale switch given the work involved in doing so.


I wouldn't be surprised if tree depth was impacting perf, given 
how git stores directories, with trees, tree children and refs.

I would think that 1000 files in the top-level dir perform better than 
1000 files spread 20 dirs deep.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Embracing git usage for Firefox/Gecko development?

2013-07-10 Thread Axel Hecht
Weirdly enough, I'm hoping we end up using one or the other, and I think git 
is more promising. Yes, I'd need to rewrite a bunch of stuff l10n-wise, 
but still.


I actually think that we should aim high. Don't bother about command 
lines; think about what takes us to a system where people can just 
contribute to Firefox and Gecko on the web.


Find the bug, click a button, start editing in your browser, try, 
review, merge.


If we can get that working without involving a single thing but a 
browser, than we're making a change. git command line vs hg command line 
doesn't bring that change.


The best model people have come up with so far is a fork per bug (LegNeato 
at the time) or per user (github).


I see how someone can serve a few 100k of forks (see our bugcount) on 
git storage, but I don't see that with hg.


That to me is the compelling argument.

Doing both hg and git sounds like we'll get the worst of both worlds.

I'm also advocating for taking hosting dead seriously. We're already 
struggling with the amount of repos we're serving on hg right now; 
adding more complexity won't make the systems more stable. Also, Vicent 
(githubber) has a great talk about the mistakes they made based on git, 
http://vimeo.com/64716825 (30 mins). They're well past the approaches 
we're currently at on the hg side.


If we don't have to be compatible with hg, we can also rethink the 
constraints that's putting on us for merge days, etc.


I know that option 2 isn't a quick path, and I love the beauty of hg, 
but we've failed to use that beauty to make the next game-changing 
infrastructure to support contributors, IMHO. I can see us having a 
better chance with git in the backend.


Quick notes on hackability:

In terms of stable and reliable hacking, hg and git are on par: shell 
out to the command line tools, parse the output.
hg being mostly in python is nice for python hacking, but the code paths 
you're hooking into are far from stable. I do that extensively, and 
quite a few ports have been painful. Scripting hg outside of python, 
well, yes.
git has libgit2 now, which is a very basic C impl, and jgit, a java 
implementation of git. Bindings for libgit2 exist in many languages, but 
only the ruby and C# ones are really good. In particular the python binding 
is far from being pythonic, and far from complete. Whether it's the right 
base to create a pythonic api is TBD. Regarding bugzilla integration, 
there are perl bindings that get modifications. I refuse to know perl 
well enough to make any statement on the value of the perl bindings, though.


Axel

On 5/31/13 2:56 AM, Johnny Stenback wrote:

[TL;DR, I think we need to embrace git in addition to hg for
Firefox/Gecko hacking, what do you think?]

Hello everyone,

The question of whether Mozilla engineering should embrace git usage for
Firefox/Gecko development has come up a number of times already in
various contexts, and I think it's time to have a serious discussion
about this.

To me, this question has already been answered. Git is already a reality
at Mozilla:

1. Git is in use exclusively for some of our significant projects (B2G,
Gaia, Rust, Servo, etc)
2. Lots of Gecko hackers use git for their work on mozilla-central,
through various conversions from hg to git.

What we're really talking about is whether we should embrace git for
Firefox/Gecko development in mozilla-central.

IMO, the benefits for embracing git are:

   * Simplified on-boarding, most of our newcomers come to us
 knowing git (thanks to Github etc), few know hg.
   * We already mirror hg to git (in more ways than one), and
 git is already a necessary part of most of our lives.
 Having one true git repository would simplify developers'
 lives.
   * Developers can use git branches. They just work,
 and they're a good alternative to patch queues.
   * Developers can benefit from the better merge algorithms
 used by git.
   * Easier collaboration through shared branches.
   * We could have full history in git, including all of hg
 and CVS history since 1998!
   * Git works well with Github, even though we're not switching
 to Github as the ultimate source of truth (more on that below).

Some of the known issues with embracing git are:

   * Performance of git on windows is sub-optimal (we're
 already working on it).
   * Infrastructure changes needed...

So in other words, I think there's significant value in embracing git
and I think we should make it easier to hack on Gecko/Firefox with git.
I see two ways to do that:

1: Embrace both git and hg as a first class DVCS.
2: Switch wholesale to git.

Option 1 is where I personally think it's worth investing effort. It
means we'd need to set up an atomic bidirectional bridge between hg and
git (which I'm told is doable, and there are even commercial solutions
for this out there that may solve this for us). Assuming we solve the
bridge problem one way or another, it would give us all the benefits
listed above, plus developer 

Re: Replacing Gecko's URL parser

2013-07-10 Thread Axel Hecht

On 7/1/13 8:30 PM, Gavin Sharp wrote:

On Mon, Jul 1, 2013 at 10:58 AM, Benjamin Smedberg
benja...@smedbergs.us wrote:

Idempotent: Currently Gecko's parser and the URL Standard's parser are
not idempotent. E.g. http://@/mozilla.org/ becomes
http:///mozilla.org/ which when parsed becomes http://mozilla.org/
which is somewhat bad for security. My plan is to change the URL
Standard to fail parsing empty host names. I'll have to research if
there's other cases that are not idempotent.


I don't actually know what this means. Are you saying that
"http://@/mozilla.org/" sometimes resolves to one URI and sometimes another?


function makeURI(str) ioSvc.newURI(str, null, null)

makeURI("http://@/mozilla.org/").spec -> "http:///mozilla.org/"
makeURI("http:///mozilla.org/").spec -> "http://mozilla.org/"

In other words,

makeURI(makeURI(str).spec).spec does not always return str.

Gavin



Nitpicking: that's not a lack of idempotence. It's not round-tripping, but 
it looks like it's idempotent.
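
To pin down the two terms (a sketch, with makeURI as defined above):

function normalize(str) { return makeURI(str).spec; }

// Round-tripping: the serialization comes back unchanged.
function roundTrips(str) { return normalize(str) === str; }

// Idempotence: re-parsing the parser's own output is a fixed point.
function idempotentFor(str) {
  return normalize(normalize(str)) === normalize(str);
}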


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Replacing Gecko's URL parser

2013-07-10 Thread Axel Hecht

On 7/3/13 8:49 AM, Anne van Kesteren wrote:

On Tue, Jul 2, 2013 at 12:09 PM, Benjamin Smedberg
benja...@smedbergs.us wrote:

Both resource: and chrome: have host names and need to support relative
URIs. Neither of them is a candidate for standardization, though. We should
just add them as special known schemes in our parser.


Well, either we have to standardize their parsing behavior, limit
their parsing behavior to chrome, or think of some third alternative.
We do not want

url = new URL(rel, base)

to differ across engines for any rel or base.




How many odd protocols and assumptions on how they work do we still have 
in mailnews' abuse of RDF? Not sure what's lurking in localstore.rdf and 
mimeTypes.rdf.


Also, sorry, can't offer more help than asking these days.

Axel

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Geolocation and the Mac

2013-05-22 Thread Axel Hecht

On 5/22/13 1:45 AM, Doug Turner wrote:

In Bug 874587, we are considering using Core Location as the default
geolocation provider on the Mac.  This would replace the use of the
NetworkGeolocationProvider (that currently points to GLS).  After code
reviews, we plan to enable this on Nightly and see how it goes.

On Android, we already do use the system location provider and not the
NetworkGeolocationProvider... so this isn't something unexpected.

The main difference is that you will get one prompt from the OS the
first time you use geolocation from Firefox -- just like every other
standard Mac application.

Does anyone have any concern that is specific to change from the
NetworkGeolocationProvider to a Mac platform specific one?


Asking the other way around: why are we doing this? Offhand, it just looks 
like more code to maintain.


Also, is there documentation on how the Mac does geolocation?

We also make statements about our requirements on 3rd party location 
services in https://www.mozilla.org/en-US/legal/privacy/firefox.html. 
Depending on how the Mac locates, those may or may not hold?


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for an inbound2 branch

2013-04-30 Thread Axel Hecht

On 4/30/13 8:46 AM, Gregory Szorc wrote:

On 4/26/2013 12:17 PM, Ryan VanderMeulen wrote:

Specific goals:
-Offer an alternative branch for developers to push to during extended
inbound closures
-Avoid patch pile-up after inbound re-opens from a long closure

Specific non-goals:
-Reducing infrastructure load
-Changing pushing strategies from the widely-accepted status quo (i.e.
multi-headed approach)
-Creating multiple integration branches that allow for simultaneous
pushing (i.e. inbound-b2g, inbound-gfx, etc)

My proposal:
-Create an inbound2 branch identically configured to mozilla-inbound.
-Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
-In the event of a long tree closure, the last green changeset from
m-i will be merged to inbound2 and inbound2 will be opened for checkins.
---It will be a judgment call for sheriffs as to how long of a closure
will suffice for opening inbound2.
-When the bustage on m-i is resolved and it is again passing tests,
inbound2 will be closed again.
-When all pending jobs on inbound2 are completed, it will be merged to
m-i.
-Except under extraordinary circumstances, all merges to
mozilla-central will continue to come from m-i ONLY.
-If bustage lands on inbound2, then both trees will be closed until
resolved. Tough. We apparently can't always have nice things.


If you consider that every repository is essentially a clone of
mozilla-central, what we have *now* is effectively equivalent to a
single repository with multiple heads/branches/bookmarks. However, the
different heads/branches/bookmarks differ in:

* How much attention sheriffs give them.
* The automation configuration (coalescing, priority, etc).
* Policies around landing.
* How developers use it.

These are all knobs in our control.

When we say create an inbound2, we're essentially establishing a new
head/branch/bookmark that behaves much like inbound1 with a slightly
different landing policy. If that's what we want to do, sure. I think
it's better than a single, frequently closed inbound.

Anyway, no matter how much I think about this proposal, I keep coming
back to the question of why don't we use project branches more?
Instead of everyone and her brother landing on inbound, what if more
landings were performed on {fx-team, services-central, wood-named
twig, etc}? I /think/ the worst that can happen is merge conflicts and
bit rot. And, we can abate that through intelligent grouping of related
commits in the same repository, frequent merges, and maybe even better
communication (perhaps even automatically with tools that alert
developers to potential conflicts - wouldn't it be cool if you updated a
patch and Mercurial was like o hai - Ehsan recently pushed a Try push
that conflicts with your change: you two should talk.).

As a counter-proposal, I propose that we start shifting landings to
project branches/twigs. We should aim for a small and well-defined set
of repositories (say 3 to 5) sharing similar automation configuration
and sheriff love. By keeping the number small, it's easy to figure out
where something should land and it's not too much of an extra burden on
sheriffs. We can still keep inbound, but it should only be reserved for
major, cross-repository landings with multi-module impact (e.g. build
system changes), merges from the main landing repositories (unless we
merge straight to central), and possibly as a backup in case one of the
landing repositories is closed.

We can test this today with very little effort: we figure out how to
bucket commits, re-purpose existing repositories/twigs, and see what
happens. If it works: great - we've just validated that distributed
version control works for Firefox development (as opposed to the
CVS/Subversion workflow we're currently using with inbound). If not, we
can try variations and/or the inbound2 idea.

Is there sanity to this proposal or am I still crazy?



To me the caveat here is that merge conflicts aren't restricted to VC 
merge conflicts. Changes to an API in one branch with a new consumer in 
another are probably the most frequent; then there are naming conflicts 
between two jsms in different modules, conflicting contract ids, etc.


I think that the cost of getting to a single revision that works out of 
multiple independent branches in a piece of software as big and modular 
as firefox has to be significant.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Axel Hecht

On 4/22/13 9:54 PM, Kartikaya Gupta wrote:

TL;DR:
* Inbound is closed 25% of the time
* Turning off coalescing could increase resource usage by up to 60% (but
probably less than this).
* We spend 24% of our machine resources on changes that are later backed
out, or changes that are doing the backout
* The vast majority of changesets that are backed out from inbound are
detectable on a try push


Do we know how many of these had been pushed to try, and 
passed/compiled there what they'd later fail?


I expect some cost of regressions to come from merging/rebasing, and 
it'd be interesting to know how much of that you can see in the data 
window you looked at.


"Has been pushed to try" is obviously tricky to find out, in particular 
with rebases, and with patches possibly modified during the rebase.


Axel



Because of the large effect from coalescing, any changes to the current
process must not require running the full set of tests on every push.
(In my proposal this is easily accomplished with trychooser syntax, but
other proposals include rotating through T-runs on pushes, etc.).

--- Long version below ---

Following up from the infra load meeting we had last week, I spent some
time this weekend crunching various pieces of data on mozilla-inbound to
get a sense of how much coalescing actually helps us, how much backouts
hurt us, and generally to get some data on the impact of my previous
proposal for using a multi-headed tree. I didn't get all the data that I
wanted but as I probably won't get back to this for a bit, I thought I'd
share what I found so far and see if anybody has other specific pieces
of data they would like to see gathered.

-- Inbound uptime --

I looked at a ~9 day period from April 7th to April 16th. During this time:
* inbound was closed for 24.9587% of the total time
* inbound was closed for 15.3068% of the total time due to bustage.
* inbound was closed for 11.2059% of the total time due to infra.

Notes:
1) bustage and infra were determined by grep -i on the data from
treestatus.mozilla.org.
2) There is some overlap so bustage + infra != total.
3) I also weighted the downtime using the checkins-per-hour histogram from
joduinn's blog at [1], but this didn't have a significant impact: the
total, bustage, and infra downtime percentages moved to 25.5392%,
15.7285%, and 11.3748% respectively.
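
A minimal sketch of that computation, assuming the treestatus history
was saved as JSON sorted oldest first; the 'when'/'status'/'reason'
field names are assumptions:

# Sum up closure spans from a treestatus dump. The last entry's span
# up to the end of the range is ignored in this sketch.
import json
from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, '%Y-%m-%dT%H:%M:%S')

def closure_stats(entries, start, end):
    closed = bustage = infra = 0.0
    for cur, nxt in zip(entries, entries[1:]):
        span = (parse(nxt['when']) - parse(cur['when'])).total_seconds()
        if cur['status'] == 'closed':
            closed += span
            reason = cur.get('reason', '').lower()
            if 'bustage' in reason:
                bustage += span
            if 'infra' in reason:
                infra += span
    total = (parse(end) - parse(start)).total_seconds()
    return tuple(100.0 * x / total for x in (closed, bustage, infra))

entries = json.load(open('treestatus-inbound.json'))
print('closed/bustage/infra: %.4f%% / %.4f%% / %.4f%%'
      % closure_stats(entries, '2013-04-07T00:00:00', '2013-04-16T23:59:59'))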

-- Backout changes --

Next I did an analysis of the changes that landed on inbound during that
time period. The exact pushlog that I looked at (corresponding to the
same April 7 - April 16 time period) is at [2]. I removed all of the
merge changesets from this range, since I wanted to look at inbound in
as much isolation as possible.

In this range:
* there were a total of 916 changesets
* there were a total of 553 pushes
* 74 of the 916 changesets (8.08%) were backout changesets
* 116 of the 916 changesets (12.66%) were backed out
* removing all backouts and changes backed out removed 114 pushes (20.6%)

Of the 116 changesets that were backed out:
* 37 belonged to single-changeset pushes
* 65 belonged to multi-changeset pushes where the entire push was
backed out
* 14 belonged to multi-changeset pushes where the changesets were
selectively backed out

Of the 74 backout changesets:
* 4 were for commit message problems
* 25 were for build failures
* 36 were for test failures
* 5 were for leaks/talos regressions
* 1 was for premature landing
* 3 were for unknown reasons

Notes:
1) There were actually 79 backouts, but I ignored 5 of them because they
backed out changes that happened prior to the start of my range.
2) Additional changes at the end of my range may have been backed out,
but the backouts were not in my range so I didn't include them in my
analysis.
3) That 14 csets were selectively backed out is interesting to me,
because it implies that somebody did some work to identify which changes
in the push were bad, and this naturally means that there is room to
save on doing that work.
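
For reference, a sketch of pulling such counts out of a json-pushes
dump; the backout regex is a heuristic, not necessarily the exact
classification used for the numbers above:

# Count backout changesets in a pushlog dump (json-pushes?full=1
# format, where each changeset carries a 'desc' field).
import json
import re

BACKOUT_RE = re.compile(r'^back(ed|ing)?[ -]?out', re.IGNORECASE)

pushes = json.load(open('inbound-pushlog.json'))
changesets = [cset for push in pushes.values()
              for cset in push['changesets']]
backouts = [c for c in changesets if BACKOUT_RE.match(c['desc'])]

print('%d changesets in %d pushes' % (len(changesets), len(pushes)))
print('%d backout changesets (%.2f%%)'
      % (len(backouts), 100.0 * len(backouts) / len(changesets)))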

-- Merge conflicts --

I also wanted to determine how many of these changes conflicted with
each other, and how far away the conflicting changes were. I got a
partial result here but I need to do more analysis before I have numbers
worth posting.

-- Build farm resources --

Finally, I used a combination of gps' mozilla-build-analyzer tool [3]
and some custom tools to determine how much machine time was spent on
building all of these pushes and changes.

I looked at all the build.json files [4] from the 6th of April to the
17th of April and pulled out all the jobs corresponding to the
push changesets in my range above. For this set of 553 changesets,
there were 500 (exactly!) distinct builders. 111 of these had -pgo
or _pgo in the name, and I excluded them. I created a 553x389 matrix
with the remaining builders and filled in how much time was spent on
each changeset for each builder (in case of multiple jobs, I added the
times).

Then I assumed that any empty field in the 553x389 matrix was a result
of coalescing. 
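
In code, the matrix construction could look roughly like this; the
build.json field names ('buildername', 'revision', 'starttime',
'endtime') and the 12-char revision keys are assumptions:

# Fill a push-changeset x builder matrix with job durations, then
# count empty cells as (presumably) coalesced jobs.
import json
from collections import defaultdict

push_heads = set(open('push-heads.txt').read().split())
matrix = defaultdict(dict)

for day in range(6, 18):
    data = json.load(open('builds-2013-04-%02d.json' % day))
    for job in data['builds']:
        props = job.get('properties', {})
        name = props.get('buildername', '')
        rev = (props.get('revision') or '')[:12]
        if rev in push_heads and 'pgo' not in name:
            row = matrix[rev]
            row[name] = row.get(name, 0) + job['endtime'] - job['starttime']

builders = set()
for row in matrix.values():
    builders.update(row)
empty = sum(1 for rev in matrix for b in builders if b not in matrix[rev])
print('%d builders, %d empty (coalesced?) cells' % (len(builders), empty))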

Re: Some data on mozilla-inbound

2013-04-23 Thread Axel Hecht

On 4/23/13 6:35 PM, Kartikaya Gupta wrote:

On 13-04-23 03:57, Axel Hecht wrote:

Do we know how many of these have been pushed to try, and
passed/compiled what they'd fail later?


I haven't looked at this. It would be useful to know but short of
pulling patches and using some similarity heuristic or manually
examining patches I can't think of a way to get this data.


I expect some cost of regressions to come from merging/rebasing, and
it'd be interesting to know how much of that you can see in the data
window you looked at.


This is something I did try to determine, by looking at the number of
conflicts between patches in my data window. My algorithm was basically
this:
1) Sync a tree to the last cset in the range
2) Iterate through each push backwards, skipping merges, backouts, and
changes that are later backed out
3) For each of these pushes, try to qpush a backout of it.
4) If the attempted qpush fails, that means there is another change that
landed since that one that there is a merge conflict with.

The problem here is that the farther back you go the more likely it is
that you will run into conflicting changes, because an increasing
portion of the data window is checked for conflicts when really you
probably only want to test some small number of changes (~30?). Using
this approach I got 129 conflicts, and as expected, the rate at which I
encountered conflicts went up as I went farther back. I didn't get
around to trying the sliding window approach which I believe will give a
more representative (and much lower) count. My code for doing this is in
the bottom half of [1] if you (or anybody else) want to give that a shot.
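
In code, the walk looks roughly like this; a sketch assuming an
mq-enabled clone synced to the last cset in the range, with error
handling omitted:

# For each push head (newest first, merges/backouts/backed-out changes
# already filtered out), try to qpush a backout of it; a failing qpush
# means a later change conflicts with it.
import subprocess

def hg(*args):
    return subprocess.call(('hg',) + args)

def find_conflicts(revs):
    conflicting = []
    for rev in revs:
        # build a reverse patch for rev and try to apply it on top
        patch = subprocess.check_output(
            ['hg', 'diff', '--reverse', '-c', rev])
        open('backout.patch', 'wb').write(patch)
        if hg('qimport', 'backout.patch', '-n', 'backout-' + rev) != 0:
            continue
        if hg('qpush') != 0:
            conflicting.append(rev)  # some later change conflicts with rev
            hg('qpop', '-f')         # drop the partially applied patch
        else:
            hg('qpop')
        hg('qdelete', 'backout-' + rev)
    return conflicting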


I expect that only a part of our programmatic merge conflicts are 
actually version control merge conflicts. There are a lot of cases like 
modifications to supposedly internal properties in toolkit starting to 
get a new use case in browser, a define changing or disappearing, etc.


All those invalidate, to some extent, the testing that has been done on 
the patch, yet don't involve modifications to the same lines of code, 
which is all that version control catches.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Nightly *very* crashy on OSX

2013-04-21 Thread Axel Hecht

Hi,

I'm having a very crashy nightly, uptime below an hour, not really bound 
to a site.


Might be https://bugzilla.mozilla.org/show_bug.cgi?id=864125, but I've 
experienced a bunch of crashes, all with pretty much non-existent stack 
traces of at most one frame.


bp-48ad9b29-145f-49ec-b282-5538f2130421 4/21/13 3:27 PM
bp-bea9322a-ab85-4586-8f26-bfbcb2130421 4/21/13 2:45 PM
bp-b3b43fa7-4c37-4f92-8d17-c82802130420 4/20/13 10:59 PM
bp-7e7f70e9-85c9-4fd2-a2d6-31c892130420 4/20/13 8:34 PM
bp-3faed1dd-98bb-4448-997b-db6f22130420 4/20/13 8:16 PM
bp-5440caa0-7ebc-48e2-bd15-7fcf12130416 4/17/13 12:44 AM
bp-3dbd9606-7d63-4a90-957a-98f772130416 4/17/13 12:32 AM
bp-2b7ac91d-1110-4780-9370-89a372130416 4/17/13 12:31 AM

Any ideas?

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Decoupling the build config via moz.build files (nuking all.js)

2013-03-15 Thread Axel Hecht

On 15.03.13 20:06, Benjamin Smedberg wrote:

On 3/15/2013 2:33 PM, Gregory Szorc wrote:



I /think/ our current spaghetti configuration is a historical artifact
from using Makefile.in's to define the build config combined with the
complexity required to do things right.

Yes, I believe you are mostly correct.



With moz.build files, we now have a mechanism to facilitate
decentralized declaration of configuration and metadata. For example,
instead of defining all preferences in
/modules/libpref/src/init/all.js, we could have multiple prefs files
each declared through multiple moz.build files. moz.build tree
traversal would discover all relevant prefs files and the build code
in /modules/libpref would simply concatenate that set of files
together. No action outside of your module would be required!
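
For concreteness, such a declaration might look like the following
hypothetical moz.build fragment - the variable name is made up, nothing
like it exists in the build system today:

# In, say, dom/indexedDB/moz.build; the backend would concatenate all
# declared files into the libpref defaults at build time.
JS_PREFERENCE_FILES += [
    'init/indexeddb-prefs.js',
]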

Note that we already started doing this using makefile-fu, see
http://mxr.mozilla.org/mozilla-central/source/modules/libpref/src/Makefile.in#44
We should definitely continue.

*Also* note that we actually have two different files:

all.js is the defaults for the Mozilla platform, including
Tbird/Seamonkey and all XULRunner apps.
firefox.js is where Firefox-specific prefs and overrides typically
should live.

--BDS



... + firefox-l10n.js, for locale-specific settings.

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Review of changes to Web compat-sensitive prefs in localizations

2013-02-27 Thread Axel Hecht

On 27.02.13 09:30, Henri Sivonen wrote:

On Fri, Feb 22, 2013 at 8:03 PM, Axel Hecht wrote:

On 22.02.13 18:41, Henri Sivonen wrote:


On Feb 22, 2013 5:30 PM, Axel Hecht wrote:


There's just no other way than post-mortem work. That's one of the
reasons why we're not taking arbitrary changesets to ship to any audience
beyond aurora and nightly; for beta and release, we've got to have
technical checks in place.

Where should I file bugs to add checks to this set of checks?



Not sure which checks you're talking about, so I can't really tell what you
want.


I meant checks like flagging attempts to go to beta with either of the
following:
  * Detector pref not being blank except for a specific white list of
particular values for the ru, uk, ja, ja-JP-Mac and zh-TW locales.
  * Fallback charset set to UTF-8 in any locale that doesn't already
have it set to UTF-8.



I'm doing a source-based review, which at least catches regressions to 
those settings.


And I think we're doing charset detector settings wrong. Let me see if I 
get what we're doing right:


- most content should be labeled for charset
- if not, let's see if we can guess the encoding
-- if we assume the language of the content, we can guess better
-- many languages really only have one option
-- ru, uk, ja, zh-TW do have options, so we use a charset detector

Now, I don't think it's right to use the UI language to guess content 
language. We have a list of user-preferred languages (with good defaults 
based on UI language). We should go through that list, and pick charsets 
to try for unlabeled content from there.


That's rather orthogonal to what you're currently trying to do, but it 
also indicates to me that we should remove all of those settings from 
intl.properties, just leave accept-lang, and deduce the rest.
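
A minimal sketch of that deduction; the language-to-charset table is an
illustrative excerpt, not the real mapping:

# Derive fallback charset and detector from the accept-languages list
# instead of shipping them per-locale in intl.properties.
FALLBACK = {
    'ja': 'Shift_JIS',
    'ru': 'windows-1251',
    'uk': 'windows-1251',
    'zh-TW': 'Big5',
}
DETECTOR = {
    'ja': 'ja_parallel_state_machine',
    'ru': 'ruprob',
    'uk': 'ukprob',
}

def deduce(accept_languages):
    # accept_languages, e.g. ['uk-UA', 'uk', 'en-US', 'en']
    for lang in accept_languages:
        for key in (lang, lang.split('-')[0]):
            if key in FALLBACK:
                return FALLBACK[key], DETECTOR.get(key, '')
    return 'windows-1252', ''  # web-compat default for everything else

print(deduce(['uk-UA', 'uk', 'en-US']))  # -> ('windows-1251', 'ukprob')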


You also mentioned in the bug that you didn't get the OK to use 
telemetry to gather further data. I think if we just collect the data 
about the charset optimization and how well it's doing, we should be OK. 
I.e., at the point where it's not the locale that matters, but just 
cp-1252 etc., the entropy goes up a good deal, in particular for small 
locales. 
I'd argue that this might even make sense to be part of health report.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Review of changes to Web compat-sensitive prefs in localizations

2013-02-22 Thread Axel Hecht

On 22.02.13 15:37, Henri Sivonen wrote:

I've been finding and, to a lesser extent, reporting and writing
patches for bugs where a localization sets the fallback encoding to a
value that doesn't suit the purpose of the fallback. In some cases,
there is such bogosity in the intl.properties file (e.g. a translation of
the word "windows" as part of a charset label) that I suspect that
changes to intl.properties have been landing without review.

I propose we adopt a rule that says that localizations need review
from the HTML parser module owner (i.e. me) to change the values of
preferences that modify the behavior of the HTML parser. (In practice,
this means the localizable properties intl.charset.default and
intl.charset.detector.)

Opinions?



I don't think that .platform is the right group to discuss policies for 
l10n, tbh.


Anyway, I don't think that it requires your review. For one, these rules 
just don't work in practice. We're facing the very same problem with 
search engines. There's just no other way than post-mortem work. That's 
one of the reasons why we're not taking arbitrary changesets to ship to 
any audience beyond aurora and nightly; for beta and release, we've got 
to have technical checks in place.


I usually catch regressions to intl.properties when reviewing requests 
for updates to those changesets.


That said, I don't know what intl.charset.detector should be set to, 
aside from nothing. Looking at your patch, the comment doesn't make that 
clearer either; I'll follow up there.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Review of changes to Web compat-sensitive prefs in localizations

2013-02-22 Thread Axel Hecht

On 22.02.13 20:02, L. David Baron wrote:

On Friday 2013-02-22 16:37 +0200, Henri Sivonen wrote:

I've been finding and, to a lesser extent, reporting and writing
patches for bugs where a localization sets the fallback encoding to a
value that doesn't suit the purpose of the fallback. In some cases,
there is such bogosity in the intl.properties file (e.g. a translation of
the word "windows" as part of a charset label) that I suspect that
changes to intl.properties have been landing without review.


It might not be a bad idea to have a better explanation in
http://mxr.mozilla.org/mozilla-central/source/toolkit/locales/en-US/chrome/global/intl.properties
of why one would want to change intl.charset.default and
intl.charset.detector, explaining clearly that they should only be
set to interesting values to deal with a substantial body of
legacy content that requires those values, and then saying what they
should be in the absence of such legacy content (the latter should
clearly be empty; I'm not sure whether the former should be UTF-8 or
ISO-8859-1, but we should have a consistent policy).

That said, I don't actually know whether the tools localizers use to
do localization lead them to read the text.

The reality is that I suspect it may be important for you to do
occasional audits of these values; it could also be valuable to have
a tool that exposes all of them in a single place (perhaps even a
place with history, like an automatically-generated wiki page).

-David



Henri filed https://bugzilla.mozilla.org/show_bug.cgi?id=844042 before 
posting here (or at least around the same time).


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Sparse localizations of toolkit and the chrome registry's getSelectedLocale()

2013-02-20 Thread Axel Hecht

On 12.02.13 22:12, Axel Hecht wrote:

On 12.02.13 20:27, Benjamin Smedberg wrote:

On 2/12/2013 12:41 PM, Axel Hecht wrote:

Hi Benjamin, Dave,

for mobile (and fxos) we're looking into doing sparse localizations of
toolkit. The work on the mobile side is bug 792077. The current
attempt is to leave the 'global' and other toolkit packages
untranslated, and use chrome overrides to supply a few localized
files, as needed by fx-android or -os.

That's all kinda nice and shiny, but it horribly breaks
getSelectedLocale(package), as that returns 'en-US' for 'global' and
friends. Breaks all kinds of URL reporters and RTL detection.

Why does getSelectedLocale(package) matter? Who uses that API to make
important decisions? And can we just change the callers to do something
different?

--BDS



http://mxr.mozilla.org/mozilla-central/ident?i=getSelectedLocale&filter=
lists the call sites.

There's a bunch of .getSelectedLocale("global") calls to determine the
currently active locale, independent of OS settings on Linux.


Some more detail: We'd need some package to ask here. We might get away with 
abstracting this into an API on the chrome registry, and for apps with 
sparse l10n, have an overload pref that determines the package to look 
at. I actually started a patch on this one, but ...



The other half of it is
http://mxr.mozilla.org/mozilla-central/ident?i=IsLocaleRTL, which is a
tad harder to resolve. It has callers all over the place.


... realized that I have no idea how to fix these.

Benjamin?

Axel

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Sparse localizations of toolkit and the chrome registry's getSelectedLocale()

2013-02-12 Thread Axel Hecht

Hi Benjamin, Dave,

for mobile (and fxos) we're looking into doing sparse localizations of 
toolkit. The work on the mobile side is bug 792077. The current attempt 
is to leave the 'global' and other toolkit packages untranslated, and 
use chrome overrides to supply a few localized files, as needed by 
fx-android or -os.


That's all kinda nice and shiny, but it horribly breaks 
getSelectedLocale(package), as that returns 'en-US' for 'global' and 
friends. Breaks all kinds of URL reporters and RTL detection.


Next step is to figure out if there's an acceptable way to make this 
work still. I had three ways in my head:
- make the chrome URLs/channels fall back to en-US and really have 
sparse toolkit localizations. This sounds really intrusive, and probably 
hard.
- make all call-sites of getSelectedLocale work around that, which I 
don't think is feasible.
- add a pref branch that gives package overrides, so that the 
chrome registry would return the selected locale for the override package 
instead of the one it's getting asked for, something like

pref("chrome.locale.override.global", "browser");

https://bugzilla.mozilla.org/show_bug.cgi?id=792077#c17
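
To illustrate the intended semantics - the chrome registry itself is
C++, so this Python sketch merely models the lookup:

# 'prefs' stands in for the pref branch, 'selected' for the registry's
# per-package locale selection.
def get_selected_locale(package, prefs, selected):
    override = prefs.get('chrome.locale.override.' + package)
    if override:
        package = override  # ask the fully localized package instead
    return selected[package]

selected = {'browser': 'de', 'global': 'en-US'}
prefs = {'chrome.locale.override.global': 'browser'}
print(get_selected_locale('global', prefs, selected))  # -> 'de'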

I personally favor the third option. Does that sound feasible to you?

Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Sparse localizations of toolkit and the chrome registry's getSelectedLocale()

2013-02-12 Thread Axel Hecht

On 12.02.13 20:27, Benjamin Smedberg wrote:

On 2/12/2013 12:41 PM, Axel Hecht wrote:

Hi Benjamin, Dave,

for mobile (and fxos) we're looking into doing sparse localizations of
toolkit. The work on the mobile side is bug 792077. The current
attempt is to leave the 'global' and other toolkit packages
untranslated, and use chrome overrides to supply a few localized
files, as needed by fx-android or -os.

That's all kinda nice and shiny, but it horribly breaks
getSelectedLocale(package), as that returns 'en-US' for 'global' and
friends. Breaks all kinds of URL reporters and RTL detection.

Why does getSelectedLocale(package) matter? Who uses that API to make
important decisions? And can we just change the callers to do something
different?

--BDS



http://mxr.mozilla.org/mozilla-central/ident?i=getSelectedLocale&filter= 
lists the call sites.


There's a bunch of .getSelectedLocale("global") calls to determine the 
currently active locale, independent of OS settings on Linux.


The other half of it is 
http://mxr.mozilla.org/mozilla-central/ident?i=IsLocaleRTL, which is a 
tad harder to resolve. It has callers all over the place.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla-central/inbound closed -- we hit the Windows PGO memory limit

2013-01-22 Thread Axel Hecht

How are the perf numbers looking?

One of the reasons for asking is that I expect RDF to be part of the 
startup and window-open codepaths, at least.


I'm not overly concerned, but wanted to make sure we look.

Axel

On 22.01.13 15:06, Ehsan Akhgari wrote:

Status update #3:

It seems like with PGO disabled for all of the above modules, we've now
decreased the linker max vmem size by about 500MB, which is nice.  There is
one PGO build bustage 
https://tbpl.mozilla.org/php/getParsedLog.php?id=19006659&tree=Mozilla-Inbound
which has been re-triggered, and I think we should wait to make sure that
it goes green, but then we should be able to reopen mozilla-inbound
temporarily, with mozilla-central following when we merge inbound to
central the next time.  We should get the results of the re-triggered build
in about two hours.  Stay tuned!

Cheers,

--
Ehsan
http://ehsanakhgari.org/


On Mon, Jan 21, 2013 at 11:32 PM, Ehsan Akhgari wrote:


Second status update:

The numbers from disabling PGO on image, accessible and webrtc are in, and
the linker max vmem size is down by only ~200MB, which is quite
disappointing, especially since according to Randell, putting webrtc
outside of libxul should buy us something around 600MB...

So, as desperate times require desperate measures, I went ahead and
disabled PGO on the following components as well: rdf (the original patch
there busted the tree so I backed it out), editor, svg, mathml, xslt,
embedding, storage, and the old HTML parser.  I will not be awake long
enough tonight to see what the progress would look like, but those
interested can follow along here: 
https://tbpl.mozilla.org/?tree=Mozilla-Inbound&jobname=WINNT%205.2%20.*%20pgo-build

.


I'm planning to keep the tree APPROVAL REQUIRED for now.  I will
re-evaluate the situation tomorrow, but I do expect that we will be able to
temporarily reopen the tree tomorrow.  In the meantime, if you can think
of more components where disabling PGO will not cause a big performance
problem, please file a bug and make it block bug 832992
(and even better, copy a file like this to their top-level directory to
disable PGO on them:
https://hg.mozilla.org/integration/mozilla-inbound/file/357b9a855e10/rdf/defs.mk
).

Thanks!

--
Ehsan
http://ehsanakhgari.org/


On Mon, Jan 21, 2013 at 5:36 PM, Ehsan Akhgari wrote:


Status update: we have landed three patches on mozilla-inbound which
disable PGO on the following directories (rdf/, image/ and accessible/) and
I have triggered PGO builds on top of them to see how much they can shave
off of the linker's vmem usage.  Randell is also working on taking some
webrtc code out of libxul in the meantime.

If all of this proves to be ineffective, we can look into de-PGO-ing more
code.

Cheers,
Ehsan






___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Integrating ICU into Mozilla build

2012-12-06 Thread Axel Hecht

On 07.12.12 01:08, Asa Dotzler wrote:

On 12/3/2012 2:39 PM, Norbert Lindenberg wrote:

Well, the first question is what size increase would be acceptable
given the benefits that ICU provides.


I don't understand what benefits this actually provides. How are users'
online lives improved by this change, either today or in the future?

Adding to the download size costs us in user acquisition so we cannot be
OK with taking on megabytes of additional download size for features of
questionable value.


I think there are folks outside of Mozilla that have been evaluating the 
app development story, and then said: to make Metro a compelling 
ecosystem for JS apps, we need at least X APIs for internationalized Y. 
That's what's shaping the JS i18n API. Nobody ever said that literally, 
but it's been in between every two lines.


I don't think it serves us well to debate the necessity of the API.

I think that other competitors implement this for the languages they 
have on the device, not so much for the languages on the web.


I think this is a challenge for us, and our approach to languages on the 
web in general. But I do think it's essential that we take on that 
challenge and win.


Axel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extending jar.mn to support cross-module overrides

2012-10-03 Thread Axel Hecht
I've looked a bit deeper into the code, and there's unused functionality 
that I'd like to rip out of JarMaker.py in favor of this:


Support for multiple jars in one go is one thing I'd love to axe. I've 
probably added that thinking we could one day just fire one jarmaker for 
all of a language pack, but that doesn't need to be in jarmaker itself, 
if we'd ever do that.


I'm tempted to drop support for processing stdin, too.

Much of that was there for backwards compat, but these days we only 
have two entry points into JarMaker, both passing in a single file on 
disk (rules.mk and mobile's custom built search-jar).


Ted?

I'd hack on that, fwiw, and I'd do so quickly, as we'll want this in 18 
for b2g :-/


Axel

On 01.10.12 21:13, Axel Hecht wrote:

Hi,

for both android and b2g, we end up only needing a handful of localized
files from toolkit.

I propose to extend JarMaker.py and jar.mn to support something like a
fake relativesrcdir, say

@relativesrcdir toolkit/locales

and then the following lines would pick up files from toolkit/locales
instead of b2g/locales or mobile/android/locales.

Does that sound sane to folks?

Axel

Bugs affected:
android, https://bugzilla.mozilla.org/show_bug.cgi?id=792077
b2g, https://bugzilla.mozilla.org/show_bug.cgi?id=796051


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extending jar.mn to support cross-module overrides

2012-10-03 Thread Axel Hecht

On 03.10.12 14:33, Mike Hommey wrote:

On Wed, Oct 03, 2012 at 02:01:02PM +0200, Axel Hecht wrote:

I've looked a bit deeper into the code, and there's unused
functionality that I'd like to rip out of JarMaker.py in favor of
this:

Support for multiple jars in one go is one thing I'd love to axe.
I've probably added that thinking we could one day just fire one
jarmaker for all of a language pack, but that doesn't need to be in
jarmaker itself, if we'd ever do that.

I'm tempted to drop support for processing stdin, too.

Much of that was there to be backwards compat, but these days we
only have two entry points into JarMaker, both passing in a single
file on disk (rules.mk and mobile's custom built search-jar).

Ted?

I'd hack on that, fwiw, and I'd do so quickly, as we'll want this in
18 for b2g :-/


Note that bug 780561 will make JarMaker always output flat, at least
when building firefox and firefox-l10n.


That shouldn't be a problem for the gecko strings; they'll just be where 
you'd expect them, with multiple locale codes and manifest files. I'd 
actually expect things to become easier if the packager picks up files 
directly from what's in the manifest files, as long as we can point it 
to a list. Then we could avoid the hack over at 
http://mxr.mozilla.org/mozilla-central/source/mobile/android/installer/Makefile.in#71 
?


How does that impact the langpack-% target, though?

Axel


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extending jar.mn to support cross-module overrides

2012-10-03 Thread Axel Hecht

On 03.10.12 15:41, Mike Hommey wrote:

On Wed, Oct 03, 2012 at 02:54:19PM +0200, Axel Hecht wrote:

On 03.10.12 14:33, Mike Hommey wrote:

On Wed, Oct 03, 2012 at 02:01:02PM +0200, Axel Hecht wrote:

I've looked a bit deeper into the code, and there's unused
functionality that I'd like to rip out of JarMaker.py in favor of
this:

Support for multiple jars in one go is one thing I'd love to axe.
I've probably added that thinking we could one day just fire one
jarmaker for all of a language pack, but that doesn't need to be in
jarmaker itself, if we'd ever do that.

I'm tempted to drop support for processing stdin, too.

Much of that was there to be backwards compat, but these days we
only have two entry points into JarMaker, both passing in a single
file on disk (rules.mk and mobile's custom built search-jar).

Ted?

I'd hack on that, fwiw, and I'd do so quickly, as we'll want this in
18 for b2g :-/


Note that bug 780561 will make JarMaker always output flat, at least
when building firefox and firefox-l10n.


That shouldn't be a problem for the gecko strings; they'll just be
where you'd expect them, with multiple locale codes and manifest
files. I'd actually expect things to become easier if the packager
picks up files directly from what's in the manifest files, as long
as we can point it to a list. Then we could avoid the hack over at 
http://mxr.mozilla.org/mozilla-central/source/mobile/android/installer/Makefile.in#71
?


Actually, that hack can go away with the new packager, as long as all
locale manifests are included in chrome.manifest.


How does that impact the langpack-% target, though?


In practice, it changes nothing, because we already use flat chrome
format for dist/bin. What changes with bug 780561 is that even for a
final jar chrome format in dist/$APPNAME, we'll be using a flat chrome
format in dist/bin. So JarMaker won't have to output jars directly.


the langpack-% target doesn't do anything in dist/bin; it's doing stuff like

@$(MAKE) -C ../../services/sync/locales AB_CD=$* XPI_NAME=locale-$* 
BOTH_MANIFESTS=1


The logic starts in toolkit/locales/l10n.mk's langpack-% rule.

Axel

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Extending jar.mn to support cross-module overrides

2012-10-01 Thread Axel Hecht

Hi,

for both android and b2g, we end up only needing a handful of localized 
files from toolkit.


I propose to extend JarMaker.py and jar.mn to support something like a 
fake relativesrcdir, say


@relativesrcdir toolkit/locales

and then the following lines would pick up files from toolkit/locales 
instead of b2g/locales or mobile/android/locales.
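
To make that concrete, a toy sketch - the jar.mn fragment and the 
parsing are hypothetical, nothing here is landed syntax:

# Resolve jar.mn source paths, honoring the proposed @relativesrcdir.
import re

SAMPLE = """\
@relativesrcdir toolkit/locales
  locale/@AB_CD@/global/intl.properties (%chrome/global/intl.properties)
"""

RELSRCDIR = re.compile(r'^@relativesrcdir\s+(\S+)\s*$')

def resolve_sources(jar_mn, default_relsrcdir):
    relsrcdir = default_relsrcdir  # e.g. 'b2g/locales'
    for line in jar_mn.splitlines():
        m = RELSRCDIR.match(line)
        if m:
            relsrcdir = m.group(1)  # entries below come from here instead
        elif '(' in line:
            src = line[line.index('(') + 1:line.rindex(')')]
            yield relsrcdir, src

for relsrcdir, src in resolve_sources(SAMPLE, 'b2g/locales'):
    print('%s: %s' % (relsrcdir, src))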


Does that sound sane to folks?

Axel

Bugs affected:
android, https://bugzilla.mozilla.org/show_bug.cgi?id=792077
b2g, https://bugzilla.mozilla.org/show_bug.cgi?id=796051
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


  1   2   >