Heads up: Fluent will now block layout

2019-01-28 Thread zbraniecki
Hi all,

We've just landed a pretty important change to how we localize our UI [0].

Starting from this week (including 66), Fluent (async and sync) will now block 
layout in all three document types: XUL, XHTML and HTML.

That means that flashes of untranslated content (FOUC) should not be possible 
anymore.

Please, help us test the new behavior and alert me or smaug if you see any 
other impact of the change.

Thanks,
zb.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1518252
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Localized Repacks now visible on (most) pushes

2018-05-30 Thread zbraniecki
Congratulations Justin!

Excited to see this all coming together. With this change, we can now improve 
both our software quality and our culture of paying attention to red L10n in 
treeherder :)

Thank you!
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: HTML injection in chrome documents is now automatically sanitized

2018-02-09 Thread zbraniecki
On Friday, February 2, 2018 at 2:11:02 AM UTC-8, Gijs Kruitbosch wrote: 
> In the further future, I expect this type of problem will go away 
> entirely because of Fluent.


That's correct! Fluent brings the concept of DOM Overlays, which allows for safe 
mixing of developer-provided DOM fragments and localization.

We're currently completing the DOM Overlays feature set to allow for element 
ordering (so that translations can reorder elements in a localizable fragment), 
but the core functionality is in tree and is used in the ongoing migration of 
Preferences to Fluent.
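As a rough sketch of the DOM Overlays idea (the message id and markup below are 
hypothetical): the developer provides the DOM fragment, the translation provides 
the text, and the two are merged without letting the translation inject 
arbitrary markup.

```html
<!-- Developer-provided fragment; data-l10n-id links it to a message,
     data-l10n-name names a child element the translation may reference. -->
<p data-l10n-id="update-notice">
  <a data-l10n-name="details-link"></a>
</p>
```

```ftl
# The translation places text around the named element, but cannot
# introduce elements the developer did not provide.
update-notice = A new version is ready - see the <a data-l10n-name="details-link">release notes</a>.
```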

It's all sanitized and safe following W3C localization guidelines. :)

zb.
p.s. The timeline for when you'll be able to use Fluent is a bit in flux. We're 
currently testing Fluent by migrating Preferences to it, and we need a bit more 
time to gain confidence that all parts of the system are ready - we wouldn't 
want you to start using it for your component only to have to tell you to stop 
because some feature is not ready yet. We hope to unblock Fluent for new code 
soon!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-03 Thread zbraniecki
On Friday, February 2, 2018 at 1:57:34 PM UTC-8, Nicholas Nethercote wrote:
> It shouldn't be too hard, because the prefs grammar is very simple. I would
> just implement "panic mode" recovery, which scans for a synchronizing token
> like ';' or 'pref' and then continues from there. It's not foolproof but
> works well in many cases.

We do quite heavy error recovery in the new l10n format.

The way we handle it is that if we encounter an error while retrieving an 
entry, we collect it as an error [0] and skip to the start of the next entry, 
recognized as the first line that starts with an ID [1].

I assume the same would work for prefs (even easier, because the line has to 
start with `pref`).
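A rough sketch of what that panic-mode recovery could look like for a 
line-oriented prefs format (the `parsePrefs` helper and its grammar here are 
simplified illustrations, not the real parser):

```javascript
// Panic-mode recovery sketch: on a parse error, record it and skip
// ahead to the next line that starts a new entry ("pref(").
function parsePrefs(source) {
  const entries = [];
  const errors = [];
  const lines = source.split("\n");
  let i = 0;
  while (i < lines.length) {
    const line = lines[i].trim();
    if (line === "" || line.startsWith("//")) {
      i++; // skip blanks and comments
      continue;
    }
    // A well-formed entry looks like: pref("name", value);
    const m = line.match(/^pref\("([^"]+)",\s*(.+)\);$/);
    if (m) {
      entries.push({ name: m[1], value: m[2] });
      i++;
    } else {
      // Collect the error, then resynchronize: skip lines until the
      // next one that starts a new entry.
      errors.push({ line: i + 1, text: line });
      i++;
      while (i < lines.length && !lines[i].trim().startsWith("pref(")) {
        i++;
      }
    }
  }
  return { entries, errors };
}
```

The key property is that one malformed entry costs you only that entry, not the 
rest of the file.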

zb.

[0] https://searchfox.org/mozilla-central/source/intl/l10n/MessageContext.jsm#63
[1] 
https://searchfox.org/mozilla-central/source/intl/l10n/MessageContext.jsm#957
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Default developer mozconfig with clang

2017-12-07 Thread zbraniecki
I'm looking for a good default setup for building a debug Firefox using clang 
(5.0) that is at the same time usable.

With GCC I hit a good sweet spot with:

```
mk_add_options "export RUSTC_WRAPPER=sccache" 
mk_add_options 'export CARGO_INCREMENTAL=1' 

ac_add_options --with-ccache

ac_add_options --enable-optimize="-g -Og"
ac_add_options --enable-debug-symbols
ac_add_options --enable-debug
```

but the same settings when I add:

```
export CC="clang"
export CXX="clang++"
```

give me a much, much slower build: it feels very slow to start, and the UI is 
noticeably sluggish.

Any recommendations?

Thanks,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Building mozilla-central with clang + icecream

2017-11-06 Thread zbraniecki
I tried to build m-c today with clang 5.0 and icecream using the following 
mozconfig:


```
mk_add_options MOZ_MAKE_FLAGS="-j$(icecc-jobs)"

mk_add_options 'export CCACHE_PREFIX=icecc'
mk_add_options "export RUSTC_WRAPPER=sccache" 

export CC=clang
export CXX=clang++

ac_add_options --with-ccache

```

The result is an error during config:

```
 0:02.58 checking the target C compiler version... 5.0.0
 0:05.91 checking the target C compiler works... no
 0:05.91 DEBUG: Creating `/tmp/conftest.oIC8nq.c` with content:
 0:05.91 DEBUG: |
 0:05.91 DEBUG: | int
 0:05.91 DEBUG: | main(void)
 0:05.91 DEBUG: | {
 0:05.91 DEBUG: |
 0:05.91 DEBUG: |   ;
 0:05.91 DEBUG: |   return 0;
 0:05.91 DEBUG: | }
 0:05.91 DEBUG: Executing: `/usr/bin/ccache /usr/bin/clang -std=gnu99 -c 
/tmp/conftest.oIC8nq.c`
 0:05.91 DEBUG: The command returned non-zero exit status 127.
 0:05.91 DEBUG: Its error output was:
 0:05.91 DEBUG: | usr/bin/clang: error while loading shared libraries: 
libLLVM-5.0.so: cannot open shared object file: No such file or directory
 0:05.91 DEBUG: | ICECC[8371] 17:45:53: Compiled on 10.251.24.73
 0:05.91 ERROR: Failed compiling a simple C source with the target C compiler
 0:05.94 *** Fix above errors and then restart with\
 0:05.94"/usr/bin/make -f client.mk build"
 0:05.94 make[2]: *** [/projects/mozilla-unified/client.mk:2
```

I know I can build using gcc+icecream or using clang without icecream, but does 
anyone know how to combine clang and icecream to make it work?

Thanks,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Pulsebot in #developers

2017-11-05 Thread zbraniecki
W dniu niedziela, 5 listopada 2017 08:18:37 UTC-8 użytkownik Gijs Kruitbosch 
napisał:

> More generally, in response to Philipp's points, I'm not convinced 
> pulsebot is responsible for any decrease in responsiveness. Early in the 
> day here in Europe (ie when it's night in the Americas) the channel is 
> usually dead (besides any pulsebot chatter). I think removing pulsebot 
> will just make it... deader. :-)

I suspect reverse causation here. We're all creatures of habit, and since most 
of the time the first thing you see when you look at #developers is a flock of 
pulsebot messages, you're unlikely to write there seeking communication with 
fellow humans.
And over time, you just find other means of communication, like the more 
specific channels Kris mentioned.

That makes sense for full-timers, who live and breathe our modules, but please 
be careful, because #developers is also the most natural channel for any 
newcomers to go to in order to ask entry-level questions about our codebase.

I believe that if we do not change anything, then #developers in fact is 
becoming a "#m-c_commits" channel anyway with no general channel for any 
developer related chatter.

If we move the pulsebot per-commit reporting to a new channel, all you have to 
do to keep getting notifications is join it, and I believe we'd see an increase 
in communication esp. between full-timers and volunteers.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Enable: Automated Static Analysis feedback in MozReview

2017-10-17 Thread zbraniecki
This is awesome! As an engineer who has to work with C++ until we advance Rust 
bindings, I always feel terrible when my reviewers spend their precious time 
identifying simple C++ errors in my code.


Seeing the advancements in static analysis for C++, rustfmt and eslint for JS, 
I'm wondering if there's a place to collect a less strict "best practices" 
analysis - more similar to rust's clippy than fmt.

In Intl/L10n land, we have a bunch of recommendations that are very hard to 
enforce, since they spread across JS, C++ and soon Rust, regarding language 
selection, manipulation, testing of intl output etc.
I'm wondering if there's a place to get those kinds of "automated feedback" 
patterns.
A few examples of what I have on my mind:

 - We'd like people to not use "general.useragent.locale" to manipulate app 
locale anymore, but rather use 
Services.locale.getRequestedLocales/setRequestedLocales.
 - We'd like to make sure people don't write unit tests that test particular 
output of the Intl APIs (that locks our tests to work in only one language and 
makes them break every time we update our datasets - a big no-no in the intl 
world)
 - We'd like to discourage people from altering app locales, and rather test 
against updated required locales.
 - Soon we'll want to recommend using the DOMLocalization.jsm/Localization.jsm 
API over StringBundle/DTD.

Those things can be encoded as regexps, but they span multiple languages (XUL, 
XHTML, HTML, XBL, DTD, JS, C++).
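A minimal sketch of such a checker (the rules and messages below are 
illustrative, not an actual Mozilla tool):

```javascript
// Sketch of a cross-language "best practices" checker: each rule is a
// regexp plus a suggestion, applied line by line to any source text
// regardless of whether it is JS, C++, or markup.
const rules = [
  {
    pattern: /general\.useragent\.locale/,
    message: "Use Services.locale.getRequestedLocales/setRequestedLocales " +
             "instead of manipulating the pref directly.",
  },
  {
    pattern: /\.properties["']/,
    message: "Prefer the new localization API over .properties files.",
  },
];

function checkSource(source) {
  const findings = [];
  source.split("\n").forEach((line, idx) => {
    for (const rule of rules) {
      if (rule.pattern.test(line)) {
        findings.push({ line: idx + 1, message: rule.message });
      }
    }
  });
  return findings;
}
```

Because the rules are plain regexps, the same rule file can run over every file 
type in the tree.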

Is there any mozilla-clippy project being planned? :)

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Switching language packs to WebExtensions

2017-10-10 Thread zbraniecki
Hi dev.platform,


We're planning to switch language packs to use the new WebExtensions platform 
somewhere down this week [0].


This is a very important change that enables the first milestone on the path to 
the new localization API [1].

In the longer term, we hope to promote language packs to a first-tier 
localization experience, which would allow us to look into switching from 
building 100+ localized builds to building a single build plus 100+ langpacks.

In the short term, this makes our language packs support L20n, but it also 
makes them safer, easier to maintain, and no longer a blocker for removing the 
old add-ons code.

If you notice any issues with the new code, please, contact L10n Drivers team.

zb.


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1402061
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Pre-commit hook for adding new FTL localization files

2017-10-03 Thread zbraniecki
Hi all,

tl;dr - This will almost certainly not affect your work. We're adding a 
temporary pre-commit hook that requires L10n Drivers to r+ any patches that 
touch .ftl files.



As the waters calm down after the 57 cycle, we're getting ready to start 
enabling the new localization API in Gecko. You've probably heard about it 
under the project code name L20n, while the API itself is named Fluent [0].

It's a big project and we are going to release it in multiple stages before we 
feel comfortable enough to enable everyone to use it.

At the core of it is a new localization file format that will replace .DTD and 
.properties. It uses the extension `.ftl`, which stands for `Fluent Translation 
List`.
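For a quick taste, a minimal `.ftl` file is a list of messages, each starting 
with an identifier (the identifiers below are made up):

```ftl
# Comments start with a hash. Each message is an identifier, "=", and a value.
brand-name = Firefox
welcome-message = Welcome to { brand-name }, { $user }!
```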

In order to increase our ability to control the landing approach, we're going 
to land a new hook that will reject any patch that touches an .ftl file and 
doesn't have r+ from the following people:

 - :flod
 - :gandalf
 - :pike
 - :stas

This should not affect your work in any way, since over the next month or so 
we'll only be manually transitioning single files from obscure UI elements, to 
minimize the risk and test-drive the new platform.

The bug for adding the hook is 
https://bugzilla.mozilla.org/show_bug.cgi?id=1394891

By All Hands we hope to be ready to remove the hook and enable everyone to use 
the new API. In the months to come, we'll be writing guidelines, tutorials, 
blog posts and other forms of prose[1] to get you all familiar with what 
changes and how to review patches for the new system.

Stay tuned!

zb.

[0] http://projectfluent.io/
[1] We're looking for skilled rappers and haiku artists with experience in tech 
rhymes.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Heads up: pre-ICU code is being removed in Firefox 58

2017-09-22 Thread zbraniecki
Hi Team,

We're currently working on removing all the code that we had for building Gecko 
and SpiderMonkey without ICU.

ICU is our core internationalization library, and CLDR our core 
internationalization database for both internal and external (think, ECMA402) 
use.

In Firefox 56 we moved the last platform (Android) to build with ICU and two 
releases later we feel ready to remove all the legacy code. It's a nice saving 
(over 20k LOC), and not having to support both modes is a relief for a lot of
us working on intl code.

The current effort is to eradicate all the code around ENABLE_INTL_API==no.

The tracking bug is here: https://bugzilla.mozilla.org/show_bug.cgi?id=1387332

It should not affect you in any way, unless your ways are fairly esoteric, in 
which case, please reach out to :m_kato, :jfkthame or myself.

Thanks,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: Unified Bootstrap Stages for Gecko

2017-09-06 Thread zbraniecki
On Tuesday, September 5, 2017 at 8:18:40 AM UTC-7, Mike Conley wrote:
> We should also consider adding a milestone for the "hero element" for
> the browser. There's some discussion fragments in bug 1369417 about this
> (which I see you're already involved in! Great!), captured from a very
> caffeinated discussion with jmaher, florian and a few other folks in SF
> this past all-hands.

I think we can dissect it in several ways:

1) We can separate firstContentfulPaint (any DOM is painted) from 
firstMeaningfulPaint (the paint that contains the selected hero element(s)).

I would be happy to see it as a progression. Once again, this way we can 
instrument our own code *and* our tp6 suite by marking which elements are 
"meaningful" and mark the timestamp only when those were accounted for in a 
paint that was completed.

The difference between firstPaint and firstContentfulPaint is the time when the 
page was "blank" but something (from <head> or manifest, such as the title or 
background) indicated that the load is successful.

The difference between firstContentfulPaint and firstMeaningfulPaint would be 
the time when the document was being reflowed. I expect that in most cases, and 
definitely in case of browser.xul, those two (actually, those three) will stay 
the same.
But you can imagine that we could then, say, switch URL bar to be injected from 
JS, and since it's marked as meaningful, we'd mark the timestamp only when this 
element was accounted for in the paint.

2) There's a distinction between visible and interactive.

In theory, we could paint, say the url bar, but until its JS is ready and 
hooked, it's not interactive.

I was thinking about defining the "uiInteractive" mark to be set when the JS 
required to make the meaningful UI pieces work is ready. That could be up to 
the Firefox UI owners' discretion - maybe the URL bar and tab bar are enough, 
maybe we want to wait for something more.

> 
> At any rate, this all sounds strictly better than what ts_paint
> currently captures. We just need to ensure that we measure paint times
> when they're presented to the user (so after composite), and using the
> timestamps on the MozAfterPaint events themselves[1] (and not taking a
> timestamp at the event-servicing time, as this adds noise and padding to
> the timestamp).

Yeah. That seems like the hardest part to me, since the required 
instrumentation goes far beyond my skills.
In bug 1388157 comment 5 [0], Markus described what has to happen for us to 
register full-paint (including composition) timestamps meeting the given 
requirements.

> So, uh, thumbs up from me. :)

In comment 2 of the same bug, Markus pointed out that we'd need a product / 
metrics decision on it.

I'd be happy to formalize the proposal into a document and build a plan for 
implementing it (starting with a new firstPaint that will unify all 
firstpaints!). But I believe before I invest that time, I'd like to get 
consensus among platform, metrics, and graphics engineers to know that this 
proposal is something they'd be willing to work toward.

zb.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1388157
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Proposal: Unified Bootstrap Stages for Gecko

2017-08-31 Thread zbraniecki
Gecko has a pretty substantial number of metrics used for measuring startup 
performance.

We have probes in Telemetry [0], StartupTimeline [1], various uses of 
MozAfterPaint, both in talos tests [2] and in production code [3][4].
We also have first paint in compositor [5], before-first-paint [6], 
timeToNonBlankPaint [7] which should not be confused with firstNonBlankPaint 
[8] and probably a number of other probes that register different timestamps 
and are used all around to mean different things. Some measure layout, others 
composition. Many of them are misused by capturing the timestamp in the 
callback to event listener fired asynchronously post-event.
We end up seeing huge swings in some "first-paint" probes [9] that are not 
reproducible in other "first-paint" probes [10], and we know that things like 
WebRender may not affect some of our "first-paint" probes, because they measure 
only the part of paint that WebRender doesn't affect [11].

It doesn't help that some of them are chrome-only while others are available in 
content.

I believe that, while we can recognize the complexity of the systems in play 
and how this complexity explains why the different probes came to be, this 
situation is counterproductive to our ability to understand the performance 
implications of our changes to the product.

I'd like to suggest establishing a single set of timestamps with unique names 
to represent various stages of the product launch.
Those timestamps would be described based on the user-perceived results and as 
such serve us as best-effort approximations of the impact of any change on the 
user-perceived experience.

In particular, my proposal is based on WICG Paint Timing proposal [12] and 
establishes the 5 major timestamps to be executed at the latest event that 
contributes to the user-perceived outcome.
For example, when selecting when to mark an "xPaint" event, we will use the 
consumer's notion of the term "paint" and mark it after *all* operations 
required for the paint to happen are done - layout, composition, rendering and 
paint.

My proposal is also based on the work on negotiated performance milestones 
established for Firefox OS project [13].

The proposed milestones are:

1) firstPaint

This milestone happens when the first paint that is influenced by the data for 
the measured object is completed by the engine (and likely submitted to the 
graphic driver).

In the context of an HTML document, the first paint that is affected by the 
document's background or title is completed.

2) firstContentfulPaint

The first contentful paint of browser.xul happens when the first paint that 
includes layout of DOM data from browser.xul is completed.

3) visuallyCompletedPaint

This milestone is achieved after the first paint with the above-the-fold part 
of the DOM ready is submitted. This event may require the document to inform 
the engine that all the items in the visual field are ready, and the next paint 
captures the timestamp.

4) chromeInteractive

This milestone is achieved when the app reports the UI to be ready to be 
interacted with. The definition of what constitutes the UI being ready is up to 
the app: it may just include the URL bar being ready to receive URL input, or 
it may wait for all core parts of the product to have event handlers attached 
(urlbar, tabbar, main menu etc.)
This milestone may be reached before (3), but not after (5).

5) fullyLoaded

This milestone also may require data from the document and it should mark the 
timestamp when all startup operations are completed.
This should include delayed startup operations that may have waited for 
previous stages to be achieved, but should not wait for non-startup delayed 
operations like periodic updates of data from AMO etc.

The last milestone is a good moment to reliably measure memory consumption 
(possibly after performing GC), take a screenshot for any tests that compare UI 
between starts and so on.

Generally speaking, (5) is when we naively can say "the app is fully launched".


==

This system would require us to provide a way for each document to inform the 
engine when some of the later stages are reached. Then the engine would take 
the next full paint and capture the timestamp.
Such a timestamp list would be available to chrome, and behind a flag for 
content, to be used by tests.

The value of such a milestone approach lies not only in unified reporting, but 
also in easier hooking of code into the bootstrap process. Having a consistent 
multi-stage bootstrap allows developers to decide at which stage their code has 
to be executed, delaying only what has to be affected by it, and as a result 
develops a culture of adding code that doesn't unnecessarily affect the early 
stages of the bootstrap.

Lastly, of course, the value comes in our ability to say that all telemetry 
probes, talos tests, tp6 tests etc. and automation can now rely on a single set 
of timestamps, which would increase developers' ability to 

Re: More Rust code

2017-07-10 Thread zbraniecki
One more thought. There's a project that fitzgen told me about that aims to 
allow for components to communicate between JS and Rust using Streams.

If we could get to the point where instead of WebIDL/XPIDL we could just plug 
streams between JS/CPP and Rust in Gecko, I believe the scope of Gecko 
components that can be written in Rust would skyrocket.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


inlining JS code at build time

2017-07-05 Thread zbraniecki
I'm working on a new feature which will add a new JS class.

I can grow the main file (mozIntl.js) or add it in a separate file 
(mozIntlLocale.js).

For readability, I think it would be nicer if I could add it as a separate file 
(I like to keep my files under 500 lines), but I don't want us to pay the 
runtime price of loading the file via JSM or XPCOM interfaces.

Is there anything like RollupJS in our build system? Or is there any plan to 
add one?

Thanks,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


LocaleService::GetRequestedLocale(s) / SetRequestedLocales

2017-04-18 Thread zbraniecki
Hi all,

The latest update in the locale rearchitecture is that we now have three 
methods on LocaleService that should be used to operate on the requested 
locales:

LocaleService::GetRequestedLocales
LocaleService::GetRequestedLocale
LocaleService::SetRequestedLocales

Please, use them instead of directly manipulating `general.useragent.locale` 
pref.

In the future we'll want to move away from this pref toward a list of locales.
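For intuition, the negotiation between requested and available locales can be 
sketched like this (a simplification; the real LocaleService performs proper 
BCP 47 language-tag matching):

```javascript
// Simplified locale negotiation: walk the requested locales in priority
// order, take exact matches first, then fall back to a language-only
// match (e.g. "de-AT" -> "de"), and always end with the default locale.
function negotiateLocales(requested, available, defaultLocale) {
  const result = [];
  for (const req of requested) {
    if (available.includes(req)) {
      if (!result.includes(req)) result.push(req);
      continue;
    }
    const lang = req.split("-")[0];
    const fallback = available.find(loc => loc.split("-")[0] === lang);
    if (fallback && !result.includes(fallback)) result.push(fallback);
  }
  if (!result.includes(defaultLocale)) result.push(defaultLocale);
  return result;
}
```

The point of operating on a list rather than a single pref value is that the 
result is an ordered fallback chain, not just one locale.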

Thanks!
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozIntl.DateTimeFormat

2017-04-10 Thread zbraniecki
On Sunday, April 9, 2017 at 9:50:49 PM UTC-7, gsqu...@mozilla.com wrote:

> How is the browser UI locale set/chosen? If based on OS locale settings, 
> great!

It's currently based on the selection of pref called "general.useragent.locale" 
negotiated against resources available in ChromeRegistry.

This will be slowly morphing into a list of fallback locales selected by the 
user in Preferences.
 
> However, if based on (I guess) downloaded version:
> 
> Does that mean that Firefox will now ignore *my* preferred OS-wide settings? 
> (e.g.: 24h clock, -MM-DD dates.)

No, that precisely means the opposite. The new API does look into your OS-wide 
regional preferences and alters the date/time patterns to respect them.

We're in the process of migrating current calls to use the new API - see the 
three dependencies in bug 1354339 which all have patches in review.

Cheers,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


mozIntl.DateTimeFormat

2017-04-06 Thread zbraniecki
Hi all,

We completed the transition of the Intl handling from using OS locale, to use 
browser UI locale.

We have a new API called mozIntl.DateTimeFormat which should be the primary way 
we format date and time in our chrome.
You can think of it as a regular ECMA402 DateTimeFormat on steroids.

It gives us two "shorthand" options, "dateStyle" and "timeStyle", which you can 
use instead of manually listing all the options. This should lead to increased 
consistency in our UI. On top of that, those two options allow us to tap into 
OS regional settings to read any manual adjustments the user made and respect 
them.

Imagine that the user changed the time format from 12-hour to 24-hour. 
mozIntl.DateTimeFormat will respect that and show the time in the current UI 
locale, but with this adjustment.

This step is crucial for better product localizability (because now the dates 
are in the same language as the rest of UI - think "Today is: April 5th" where 
"April 5th" comes from date formatting and "Today is:" from l10n - we want both 
to be in one language).

Example of how to use the new API:

```

let dtf = mozIntl.createDateTimeFormat(undefined, {
  dateStyle: 'long',   // full | long | medium | short
  timeStyle: 'medium', // full | long | medium | short
});

let now = new Date();
dtf.format(now);

```
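For what it's worth, `dateStyle`/`timeStyle` shorthands of this shape later 
became part of standard ECMA-402, so in today's JS the equivalent can be tried 
with plain `Intl` (the exact output string varies with locale data):

```javascript
// Standard ECMA-402 equivalent of the dateStyle/timeStyle shorthands.
const dtf = new Intl.DateTimeFormat("en-US", {
  dateStyle: "long",   // full | long | medium | short
  timeStyle: "medium", // full | long | medium | short
});

// Format a fixed instant; the rendered text depends on the local timezone.
const formatted = dtf.format(new Date(Date.UTC(2017, 3, 6, 12, 0, 0)));
```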

Please, use the new API for all new code and when possible, migrate old code to 
use it.

Thanks!
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Gecko Locale Selection remodel completed

2017-03-28 Thread zbraniecki
Hi Firefox devs!

We just landed the final change that culminates the 3 month long major refactor 
of how Gecko handles language selection [0] (pending autoland->central).

We now have a single new API - mozilla::intl::LocaleService [1] (and 
mozILocaleService [2]) - which centralizes all operations related to things 
like asking for languages that the user requested, which language resources are 
available and which languages have been negotiated for the app to use.
Also, all events related to those operations are now distributed from the new 
API.

On top of that, we gained an additional helper API for retrieving 
internationalization-related information from the operating system - 
mozilla::intl::OSPreferences [3] (and mozIOSPreferences [4]).


Changes
===

We migrated all the code in mozilla-central to use the new code [5] for you, 
but if your work uses those areas, please make sure to take a look at those 
APIs and start using them in your code.
In particular, please use `LocaleService::getAppLocalesAs(BCP47|LangTags)` for 
all code that wants to follow the current UI language, and use LocaleService's 
`getRequestedLocales` and `setRequestedLocales` instead of manipulating 
`general.useragent.locale` pref directly [6].

JS Context and mozIntl APIs now use the current browser UI language selection 
instead of OS locales.

That change also marks the beginning of the deprecation of nsILocaleService, 
nsLocaleService, nsLocale and related APIs [7].

Lastly, we've moved away from ChromeRegistry as the central place for 
negotiating language selection for the product (LocaleService takes over that 
role), which means that in almost all cases you should consult LocaleService, 
not ChromeRegistry.


What's Next
===

This last step opens up the ability for us to introduce a new localization 
resources registry [8] which will slowly take over that role from 
ChromeRegistry.

There are still a couple minor features we'll be adding to LocaleService over 
the next month [9], but generally, the API is complete and ready to handle 
centralized language management in Gecko.

Beyond cleaning up 20+ year old code, unifying the behavior and enabling the 
new registry, those changes put us on a path to more flexible multi-lingual 
behavior aligned with the modern Web.

From here, we plan to make Gecko able to:
 - decouple release of the product from the releases of language resources
 - handle localization of HTML/WebComponents/WebExtensions/XUL/XBL/JS using a 
single localization API
 - better align between language resources and Intl APIs (date/number 
formatting etc.)
 - deliver language resource updates on the fly
 - gain control over at what point in the UI loading we inject strings
 - change languages on the fly
 - first step on the path to the end of .DTD and .properties
 - handle error scenarios better (death to the Yellow Screen of Death!)

Team responsible for the refactor: :gandalf, :jfkthame, :pike, and :stas.

Greetings,
zb.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1347306
[1] http://searchfox.org/mozilla-central/source/intl/locale/LocaleService.h
[2] 
http://searchfox.org/mozilla-central/source/intl/locale/mozILocaleService.idl
[3] http://searchfox.org/mozilla-central/source/intl/locale/OSPreferences.h
[4] 
http://searchfox.org/mozilla-central/source/intl/locale/mozIOSPreferences.idl
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=1334772
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1334772
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=1350102
[8] https://bugzilla.mozilla.org/show_bug.cgi?id=1333980
[9] https://bugzilla.mozilla.org/show_bug.cgi?id=1346877
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Async Iteration is available on non-release-only, for testing purpose

2017-03-27 Thread zbraniecki
That's super exciting!

The new localization resources registry module is being written with async 
generators in mind. I have the patch ready in the bug, which can be flipped to 
go async with 8 lines of code.

I know we're not planning to make it ride trains just yet, but if you need a 
real-world use case to profile the perf or memory, I think I have a candidate 
for you :)

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: ScrollTimeline

2017-03-25 Thread zbraniecki
On Saturday, March 25, 2017 at 8:02:36 PM UTC+1, Botond Ballo wrote:

> What you describe sounds like other types of timelines, linked to user
> gestures. There is mention of that on the wiki [1] [2], but no
> concrete proposal that I'm aware of. I would imagine contributions to
> the development of such a proposal would be welcome!

Thanks!

Yeah, it seems that the touch-based scrubbing is the closest to what I thought 
of. It's not my idea, of course. I just really like the UX of the material 
design from Google where the animation is linked to the progress of some touch 
event.

It's not only pleasant to look at and play with, but it also lowers the 
cognitive confusion factor, since the visual stimulus is directly linked, by 
both trigger and progress, to the user's action, instead of "happening on its 
own".
Lastly, it works really well as a tutorial, since the user can slow down or 
pause in the middle of the action and the animation slows down or pauses in 
response. The user can even reverse the motion and the animation will follow.
All of this, in my naive UX study on a few of my less technically gifted 
friends, did wonders for their ability to gain a sense of comfort in 
understanding the UX paradigms of the software.

I'd love to see the Web gain this capability, and of course, to see Firefox UI 
use that.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Preferences::RegisterCallback and variants will now do exact, not prefix, matches

2017-03-22 Thread zbraniecki
On Tuesday, March 21, 2017 at 7:46:07 PM UTC-7, Boris Zbarsky wrote:
> Are you properly handling the fact that AddStrongObserver watches all 
> prefs starting with the prefix you pass it?  ;)

I don't, and I'd love not to. I know perfectly well which two strings I want to 
watch, and I want to watch only them.

I don't think there's a high risk of someone adding a new pref that starts with 
"general.useragent.locale", but if I could narrow it down to just this exact 
string instead of treating it as a prefix, I'd like to.
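
To illustrate the difference being discussed - a toy observer registry in plain 
JS (not the real Preferences/observer API, and the second pref name is 
hypothetical) showing how prefix matching over-notifies compared to an exact 
match:

```javascript
// Toy model of the two matching modes: "prefix" fires the callback for
// any pref name starting with the pattern, "exact" only for an exact hit.
function makeRegistry(matchMode) {
  const observers = [];
  return {
    addObserver(pattern, callback) {
      observers.push({ pattern, callback });
    },
    notify(prefName) {
      for (const { pattern, callback } of observers) {
        const hit = matchMode === "prefix"
          ? prefName.startsWith(pattern)
          : prefName === pattern;
        if (hit) {
          callback(prefName);
        }
      }
    },
  };
}

const seenPrefix = [];
const prefixReg = makeRegistry("prefix");
prefixReg.addObserver("general.useragent.locale", p => seenPrefix.push(p));
prefixReg.notify("general.useragent.locale");
// A hypothetical future pref sharing the prefix also fires the observer:
prefixReg.notify("general.useragent.locale.fallback");

const seenExact = [];
const exactReg = makeRegistry("exact");
exactReg.addObserver("general.useragent.locale", p => seenExact.push(p));
exactReg.notify("general.useragent.locale");
exactReg.notify("general.useragent.locale.fallback");

console.log(seenPrefix.length); // 2 -- one spurious notification
console.log(seenExact.length);  // 1
```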

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Preferences::RegisterCallback and variants will now do exact, not prefix, matches

2017-03-21 Thread zbraniecki
Is there a reason we should use RegisterCallback over AddStrongObserver?

I have a fresh API where I'm using AddStrongObserver at the moment, but would 
be happy to switch if that will be cheaper / more future-compatible.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-13 Thread zbraniecki
On Monday, March 13, 2017 at 7:45:44 AM UTC-7, Byron Jones wrote:
> David Burns wrote:
> > We should try mitigate the security problem and fix our nit problem 
> > instead of bashing that we can't handle re-reviews because of nits.
> one way tooling could help here is to allow the reviewer to make minor 
> changes to the patch before it lands.
> ie.  "r+, fix typo in comment before landing" would become "r+, i fixed 
> the comment typo"

I don't think it's realistic to expect already-overloaded reviewers to do even 
more.

In my experience, reviewer's time is worth ~4 times more than patch author's 
time.
That's a completely arbitrary number, but it reflects how the load balances out 
in my experience.

So, I'd actually say we should do everything possible to *minimize* the amount 
of time required from the reviewer, rather than increasing it.

And I also don't think that increasing the number of reviewers would fix it. 
A reviewer is by nature often a senior engineer trying to balance writing 
patches that very few people can write against reviewing patches that less 
experienced engineers wrote.

Their time is insanely valuable because neither of those tasks can be easily 
done by the person requesting the review.

Of course there are exceptions like peer-reviews and rubber-stamping of a 
patch, but in general, I'd like us to think about shifting the burden onto 
automation / patch author to do as much work as possible before the reviewer 
commits their time.
And once they're done, once again we should imho limit the time we expect from 
the reviewer for any follow-up reviews.

For that reason, the lack of interdiff in the rebase scenario in MozReview is a 
major hassle in my experience. And the idea that the reviewer has to re-review 
multiple times or edit the patch themselves is a step in the wrong direction.

Also, the idea that "anyone can re-review the patch" is very shaky. It would 
not work in the most crucial and delicate areas where the number of people 
familiar with the area is just low. Say, accessibility, graphics, 
internationalization, security etc.

In those lines, there's often a single person in the organization who can 
comfortably review the patch, and if they're in a different timezone, then 
asking a random reviewer on IRC for a review on nits is an illusion if the nits 
are anything beyond "update the comment".

On top of that, the idea also taps into the concern I raised above. The 
cognitive load required for a reviewer to step into a bug and skim through all 
the comments, the patch history, and the latest review with its request for 
nits, just to understand whether the nits reflect the original reviewer's 
request, is also non-trivial.

The way it's presented in this thread feels like a utopian vision where anyone 
can just take a quick glance and stamp an r+, but in reality it'll either add 
significantly to the load of already overloaded group in our project, or become 
an illusion of security with people just accepting everything from people they 
know.

I'm actually concerned that in the era where most projects go in the direction 
of streamlining the development and reducing the bureaucracy as much as 
possible (post-landing reviews, peer-reviews etc.), we're talking about adding 
another hoop to jump through.

I'm all for increased security (2FA etc.), but unless there's an unspoken set 
of cases where security of our project has been compromised by a change in the 
patch that was added after r+, I'd like to question if we're really at the 
point where we need such tradeoff.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-09 Thread zbraniecki
As others stated, the idea that patch cannot be altered after r+ has a massive 
effect on productivity. I can't overstate how much it would impact day-to-day 
work for engineers, and I don't really see an easy way out.

Even if we added "approval to land with minor changes", there's a) no way to 
distinguish minor from major, and b) reviewers will either start using it as a 
default, or keep forgetting about it.

I like the direction, but I honestly believe that this single idea would make 
working with Gecko a massive PITA.
With autoland my path to central from when I get all the required reviews is 
already ~24h because I push the "land" button around 2pm PST and it gets merged 
into central around 3am, so I can only follow-up the next day.

I recently introduced a regression not caught by me, my reviewer or tests. It 
wasn't major enough to warrant panic mode, but I'm sure it irritated people 
with spawned warnings and of course it has some impact on our nightly users.
I landed the follow-up within 20 minutes of discovering the bug, but since it 
went through autoland, it took two nightly builds and a full day before users 
stopped reporting dups of the bug.

Now, if you add to that, that every minor change I make after my reviewer 
approved my patch I need to get a re-review (and most of my reviewers are in a 
different timezone), it'll basically at the very best add just another 24h to 
the cycle.
If it's Friday, or my reviewer is busy with other stuff or on PTO, it'll add a 
couple days.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread zbraniecki
Reporting first results.

We got an icecream setup in the SF office, and I was able to plug myself into 
it and get an icecc+ccache+gcc combo doing a fresh debug build in <30 min.

On top of that, I had low load on my machine, which is nice as in the meantime 
I was able to work on other things.

Now, two things that are probably still limiting me are:
 * network: I did this over wifi. I'll get a USB->Ethernet adapter, and this 
should speed things up further
 * I still have only 8GB of ram which is probably the ultimate limiting factor

:bdhal says that he got his builds under 5min, which is close to the lower 
bound I guess.

Other notes:
 * I didn't test without ccache. It may also work better for me, I'll test it 
later
 * I failed to get icecc work with clang for some reason

Lastly, as much as this does help me, it doesn't help us lower the barrier for 
contributors not working from the office. They usually have less powerful 
machines, with less ram and no access to farms.
So any work we can do to split those central headers that turn two-day rebuilds 
into full rebuilds would go a long way toward making the experience of 
contributing to Gecko better.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there a way to improve partial compilation times?

2017-03-08 Thread zbraniecki
On Wednesday, March 8, 2017 at 8:57:57 AM UTC-8, James Graham wrote:
> On 08/03/17 14:21, Ehsan Akhgari wrote:

> At risk of stating the obvious, if you aren't touching C++ code (or 
> maybe jsm?), and aren't using any funky compile options, you should be 
> using an artifact build for best performance.

I am working 90% of my time in C++; I just do most of my work within a single, 
quite small, module (`/intl`), so my recompilation times after changes are good 
(30 sec?).

But, every couple days, when I rebase on top of new master, is when I have the 
one hour recompile.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread zbraniecki
On Tuesday, March 7, 2017 at 3:24:33 PM UTC-8, Mike Hommey wrote:
> On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
> clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
> performance have not improved in 3 years), on Linux. 70 minutes is way
> too much.

Arch Linux.

Sometimes I'll get down to 40min, but often it's 60.

I'm going to try to remove ccache for the next rebuild and see how it affects 
things.

I may also have to request a new laptop, although I was really hoping not to 
have to for at least another year...

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread zbraniecki
So,

I'm on a Dell XPS 13 (9350), and I don't think that toying with MOZ_MAKE_FLAGS 
will help me here. "-j4" seems to be a bit high and slows down my other work a 
bit while the compilation is going on, but it's bearable.

I was just wondering if really two days of patches landing in Gecko should 
result in what seems like basically full rebuild.

A clean build takes 65-70 min; a rebuild after two days of patches takes 
50-60 min.

It seems like something is wrong and I'd expect such partial rebuilds to be 
actually quite fast, but somehow it seems that either ccache, or our build 
system can't narrow down things that require recompilation well?

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Is there a way to improve partial compilation times?

2017-03-07 Thread zbraniecki
I'm on Linux (Arch), with ccache, and I work on mozilla-central, rebasing my 
bookmarks on top of central every couple days.

And every couple days the recompilation takes 50-65 minutes.

Here's my mozconfig:
▶ cat mozconfig 
mk_add_options MOZ_MAKE_FLAGS="-j4"
mk_add_options AUTOCLOBBER=1
ac_add_options --with-ccache=/usr/bin/ccache
ac_add_options --enable-optimize="-g -Og"
ac_add_options --enable-debug-symbols
ac_add_options --enable-debug

Here's my ccache:
▶ ccache -s
cache directory                     /home/zbraniecki/.ccache
primary config                      /home/zbraniecki/.ccache/ccache.conf
secondary config (readonly)         /etc/ccache.conf
cache hit (direct)                 23811
cache hit (preprocessed)            3449
cache miss                         25352
cache hit rate                     51.81 %
called for link                     2081
called for preprocessing             495
compile failed                       388
preprocessor error                   546
bad compiler arguments                 8
autoconf compile/link               1242
no input file                        169
cleanups performed                    42
files in cache                     36965
cache size                          20.0 GB
max cache size                      21.5 GB

And all I do is pull -u central, and `./mach build`.

Today I updated from Sunday, it's two days of changes, and my recompilation is 
taking 60 minutes already.

I'd like to hope that there's some bug in my configuration rather than the 
nature of things.

Would appreciate any leads,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to deprecate - command line parameter "uiLocale"

2017-03-06 Thread zbraniecki
Deep in the dungeons of Mount nsChromeRegistryChrome[0], lies an ancient line 
of code that in the Days Of Old allowed knowledgeable spellcasters to select 
the locale of the user interface straight from the command line - "uiLocale"[1].

Since then, many things have changed, and folks forgot about this ancient 
magic, while at the same time added more and more ways to get the UI locale 
that did not take this command line parameter into account.

Today, we have 13+ consecutive, and slightly incompatible, ways of selecting 
the user requested locale[2]. 12 of them ignore uiLocale, which makes me 
believe that if someone would try to use it they'd end up with an unusable 
patchwork of locales.

Yours truly intends to craft one method to rule them all and in the 
mozilla::intl::LocaleService bind them, but he'd also like to avoid migrating 
that ancient command line spell and let it sail to Valinor.

Is there anyone still using it? Is there any reason to keep it?

Thanks,
zb.


[0] 
http://searchfox.org/mozilla-central/source/chrome/nsChromeRegistryChrome.cpp#42
[1] 
http://searchfox.org/mozilla-central/source/chrome/nsChromeRegistryChrome.cpp#339
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=135#c4
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing LocaleService and mozILocaleService

2017-02-07 Thread zbraniecki
On Tuesday, February 7, 2017 at 2:33:05 PM UTC-8, Gijs Kruitbosch wrote:
> Please can you add an alias into Services.jsm so that 
> Services.locale. works ?

Yeah, filed bug https://bugzilla.mozilla.org/show_bug.cgi?id=1337551

> 
> Also, that particular API looks like it might as well be a readonly 
> attribute, which allows more concise use from JS while looking the same 
> from C++:
> 
> const {appLocales} = Services.locale;

Good idea, I'll discuss it with :jfkthame.
 

> I'm confused. The reason I normally ask for the locale that's currently 
> in use from the chrome registry is because I want to know whether 
> strings are going to be available for the feature I'm working on (esp. 
> when relating to uplifts), based on a list I have with locales that have 
> strings.

And that's a valid use-case for which you will want to use the `GetAppLocale`.

There are other use cases which do benefit from the access to fallback chain, 
like Intl formatters, collators etc.

> Conversely, if we're going to start providing the OS locale (which might 
> not be reflected at all in the Firefox UI) as the first/top locale

We will not.

> Worse, what happens in a situation where:
> 1) I'm using my OS in French
> 2) I'm running en-US Firefox
> 3) I'm installing an add-on that ships with localization into French.

If your negotiated locale fallback chain for Firefox has 'en-US' as the first 
locale, and the add-on has an en-US translation, we will use it.

So generally yes, we'll try to keep the UI translation as consistent as we can 
across the whole product. But we may benefit from the ability to fall back on 
something better than the last resort for some operations, in edge cases where 
we do not have data for your first locale.

> Do I just misunderstand what the goal of this 
> API is or how it's supposed to work? Can you clarify?

Hope that helps!

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Introducing LocaleService and mozILocaleService

2017-02-07 Thread zbraniecki
Hi devs,

intl/locale is going through a refactor phase to get us ready for the new 
localization framework and the new l10n UX [0].

As part of it, we just landed a new LocaleService that is going to take over 
from nsLocaleService and nsChromeRegistry as the API for locale negotiation for 
the platform.

The bottom line is that starting today, we'd like to move all call-sites that 
are looking to use the current locale fallback chain of the application to use:

C++ [1]

```
nsTArray<nsCString> appLocales;
mozilla::intl::LocaleService::GetInstance()->GetAppLocales(appLocales);
```

JS [2]:

```
const localeService = Components.classes["@mozilla.org/intl/localeservice;1"]
  .getService(Components.interfaces.mozILocaleService);

const appLocales = localeService.getAppLocales();
```

If your code can handle only one locale, there's a helper `GetAppLocale` which 
will retrieve just the top one, but in general we want to move to APIs that 
take the full fallback chain, to fall back better in case the first locale is 
not available (that's how our l10n and intl code will work).
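
The reason the full chain matters is that ECMA402 Intl APIs can negotiate 
against a whole requested-locale list rather than a single tag. A quick sketch 
in plain JS (the locale list here is made up for illustration, not the real 
output of GetAppLocales):

```javascript
// Hypothetical fallback chain: the first entry ("tlh", Klingon) has no
// CLDR data in typical builds, so the formatter negotiates down the list
// instead of jumping straight to the runtime's default locale.
const appLocales = ["tlh", "pl", "en-US"];

const nf = new Intl.NumberFormat(appLocales);
const resolved = nf.resolvedOptions().locale;

// Which locale actually won depends on the ICU data available in this
// runtime, but it is the result of negotiating the whole requested list.
console.log(resolved);
console.log(nf.format(1234567.89));
```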

We started unifying all call-sites to the new API [3] and we'd love to get some 
help, so if you're maintaining any Gecko code that currently retrieves the 
current app locale, please help us by migrating your code.

And if you're working on new code, use only this service from now on as we'll 
try to deprecate the others.

Thanks,
zb.

p.s. The logic inside LocaleService will be maintained by our team and we'll be 
improving that to use preferences from the OS, and new Firefox Preferences UI 
in the future together with full language negotiation.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1325870
[1] http://searchfox.org/mozilla-central/source/intl/locale/LocaleService.h#43
[2] 
https://hg.mozilla.org/mozilla-central/file/tip/intl/locale/mozILocaleService.idl#l19
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1334772
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Adding Rust code to Gecko, now documented

2017-01-25 Thread zbraniecki
On Thursday, November 10, 2016 at 5:15:26 AM UTC-8, David Teller wrote:
> Ok. My usecase is the reimplementation of OS.File in Rust, which should
> be pretty straightforward and shave a few Mb of RAM and possibly a few
> seconds during some startups. The only difficulty is the actual JS
> binding. I believe that the only DOM object involved would be Promise,
> I'll see how tricky it is to handle with a combo of Rust and C++.

Did you ever get to do this? Is there a bug?

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Testing Wanted: APZ Scrollbar dragging

2017-01-25 Thread zbraniecki
Easily reproducible on Ubuntu 16.10, so cross platform.

Worth filing a bug?

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-23 Thread zbraniecki
On Monday, January 23, 2017 at 12:03:35 PM UTC-8, Eric Shepherd wrote:
> It seems to me, anyway, that the ideal solution would be to enhance HTML 
> (ideally in the spec) with the features needed to build a full-fledged 
> desktop UI. That would be fabulous not just for Firefox making the transition 
> to defining its UI in HTML, but could potentially be adopted by other 
> projects and platforms that use JavaScript and HTML to build apps (such as 
> Electron).

This is, by the way, what we're doing with Intl.

We're replacing all of our Intl APIs with TC39 ECMA402 backed Intl APIs (driven 
by CLDR) and when we find a limitation, we introduced a `mozIntl` chrome-only 
API which serves us as a place where we close the gap while maintaining future 
compatibility and at the same time we use it to test ideas for future ECMA402 
extensions.
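
For reference, this is what the standard, CLDR-driven side of that stack looks 
like from JS; the chrome-only mozIntl pieces then fill whatever the standard 
API can't express yet. The snippet below uses only stock ECMA402, nothing 
Mozilla-specific:

```javascript
// Standard ECMA402 API: one call, and per-locale CLDR data does the work.
const date = new Date(Date.UTC(2017, 0, 23));

const fmt = new Intl.DateTimeFormat("en-US", {
  timeZone: "UTC",
  year: "numeric",
  month: "long",
  day: "numeric",
});

console.log(fmt.format(date)); // "January 23, 2017"
```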

This model has been received really well by TC39 so far and keeps us on a path 
with a multi-win:

1) We're aligned with the spec work
2) We're championing a lot of the spec work
3) We have a reason to believe that eventually, all of mozIntl will end up in 
ECMA402
4) Moving things from mozIntl into Intl once it gets into spec is easy

I can totally imagine us doing it for HTML.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-18 Thread zbraniecki
Regarding choice of framework for HTML-backed UIs.

My initial suggestion is to try not to go into a fully-opinionated stack like 
React.

My opinion has nothing to do with React itself, it's quality or suitability, 
but with a generic approach of using an opinionated stack that diverges from 
vanilla javascript.

Sticking as close to bare metal as possible, will allow us to solve our needs 
by improving the Web stack, instead of improving a particular framework.

Over time, if we're successful, we will not only create the Firefox UI on the 
HTML stack, but we'll enable others to create UIs at the level of complexity of 
Firefox's using the same stack.

If we go for an opinionated framework, we'll sort of lock ourselves into its 
technology, regardless of how good it is. If, 5 years from now, React is not 
the best solution, we'll have a major challenge migrating away from it, but as 
:brendan likes to say, "Always bet on JS" - JS will be here, and using JS will 
likely be the right choice for our high-level glue code.

I'd also prefer to develop dependency-free libraries that we can contribute to 
the web world, than react plugins that we would contribute to react community 
only.

For that reason, I'd suggest we try to evaluate what needs we really have that 
we believe React could solve - is it about data bindings? routing? components? 
- and consider trying to find a minimal library that will solve those.

For example, Vue seems to be much lighter and less opinionated, while Polymer 
seems to stick closer to the vanilla web stack, increasing the chance that 
we'll eventually be able to reduce our reliance on any framework as the web 
stack progresses.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-17 Thread zbraniecki
One more thing that XUL gives us is L10n.

With HTML, we can use .properties to load localization resources and inject 
them into HTML, but I believe this to be a very inelegant solution with a 
surprisingly high risk of bugs.

We do have an l10n framework called L20n that is supposed to replace DTD and 
works in raw XUL and HTML, binding elements to l10n messages with the 
`data-l10n-id` attribute.

Our plan was to target the post-quantum release to refactor the XUL code to 
switch from DTD to L20n, but we could also just introduce the new approach and 
use it for new code already, while waiting for post-quantum to transition the 
old code.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing GTK2 widget support?

2016-12-25 Thread zbraniecki
On Sunday, December 25, 2016 at 6:36:37 PM UTC-8, Mike Hommey wrote:
> XP_GNOME comes to a surprise to me. But then it's only used in one
> place, and defined in the same place, so it's a local thing...

Yea, it just so happens that it's browser.xul ;D

But correct, it is a local thing.

> As for MOZ_WIDGET_GTK vs. MOZ_WIDGET_TOOLKIT, there are 2 essentially
> because you can't do things like "#if MOZ_WIDGET_TOOLKIT == foo" in C++
> code, although, come to think of it, we could have a MOZ_WIDGET(foo)
> macro...
> 
> Also note that generally speaking, there is a difference between the
> platform (e.g. XP_WIN, XP_LINUX, etc.) and the widget
> (MOZ_WIDGET_TOOLKIT). For example, while probably not true anymore, you
> could build with the Gtk toolkit for Mac. So, generally speaking, there
> is a need to differentiate both.

That's a great point.

If we could unify around two variables:
 - platform (win, macos, lin, etc.)
 - toolkit (gtk, cocoa, win, etc.)

that would help me expose those two for localizers. Although quite honestly, 
I'd love to expose only one to localizers to differentiate strings upon.

Probably toolkit is the right one for front-end strings.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing GTK2 widget support?

2016-12-25 Thread zbraniecki
While preparing for the transition to the new localization framework, I 
noticed[0] that we use a large number of loosely overlapping build-time 
variables to indicate different combinations of widgets, platforms and GUIs.

It would be awesome if we could bring some consistency to that. In particular, 
I'd appreciate if we could decide if we want to go for XP_GNOME, or 
MOZ_WIDGET_GTK=2|3 or MOZ_WIDGET_TOOLKIT=gtk2|gtk3.

Many of those variables, which currently are used to separate per-platform 
strings, will be replaced with runtime l10n functions, but it would be nice to 
have it cleaned up so that localizers can only decide on the variant depending 
on one variable (like, platform=win|lin|mac) vs. multiple.

So, I vote yes, but I'd also like to ask whoever will be implementing it to 
consider unifying the build-time variables used to special-case gnome-related 
UI code.

Thanks,
zb.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1311666
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: So, what's the point of Cu.import, these days?

2016-09-26 Thread zbraniecki
So, it seems to me that we're talking about two aspects of module loading:


1) Singleton vs. per-instance

Cu.import allows us to share a single object between all the code that 
references it.

ES6 modules are not meant to do that.

2) Conditional vs. static

Cu.import allows us to decide *when* we're loading the code for side-effects, 
or even *if* we're going to load it at all.

if (needed) {
  Cu.import(...);
}

or

XPCOMUtils.defineLazyModuleGetter(this, 'Services',
  'resource://gre/modules/Services.jsm');

-

The latter one may be resolved by some future ECMA proposals like:
 - https://github.com/domenic/proposal-import-function
 - https://github.com/benjamn/reify/blob/master/PROPOSAL.md

The former is more tricky. I'm not sure how we can annotate the difference 
within the static import-statement world.
In the import-function world we could maybe do:

import('resource://gre/modules/Services.jsm', {singleton: true}).then();

but for static imports I don't see a semantically compatible way to annotate a 
singleton reference.
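
To make the singleton question concrete, here is a toy loader in plain JS 
(none of this is the real Cu.import or module registry; the module URL and 
options are made up) where a `{singleton: true}` option gives Cu.import-style 
shared state and the default gives fresh per-importer state:

```javascript
// Toy module loader: "singleton" returns one shared exports object
// (Cu.import-style); otherwise the module is evaluated anew per importer.
const registry = new Map();       // url -> module factory
const singletonCache = new Map(); // url -> shared exports object

function define(url, factory) {
  registry.set(url, factory);
}

function load(url, { singleton = false } = {}) {
  if (singleton) {
    if (!singletonCache.has(url)) {
      singletonCache.set(url, registry.get(url)());
    }
    return singletonCache.get(url);
  }
  return registry.get(url)(); // fresh evaluation every time
}

define("resource://example/Counter.jsm", () => {
  let count = 0; // per-evaluation state
  return { increment: () => ++count, get: () => count };
});

const a = load("resource://example/Counter.jsm", { singleton: true });
const b = load("resource://example/Counter.jsm", { singleton: true });
a.increment();
console.log(b.get()); // 1 -- a and b share the same state

const c = load("resource://example/Counter.jsm");
console.log(c.get()); // 0 -- fresh instance, no shared state
```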

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: So, what's the point of Cu.import, these days?

2016-09-25 Thread zbraniecki
If I understand correctly, ES module imports work differently from JSM's 
Cu.import in terms of which scope they operate in.

JSMs are singletons, while an ES import brings code into your JS context.

How would you want to differentiate between those two modes?

Another difference is that an ES import cannot be loaded conditionally, while 
Cu.import can (that may be solved by the future ES import() proposal).

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: tpaint test is testing more than just window opening and is regressing with each run

2016-09-09 Thread zbraniecki
Hi,

We're working on a rewrite of the localization infrastructure in Gecko and 
tpaint is the main perf test that we're affecting.

My team has put a lot of work over the last few weeks into better understanding 
the tpaint test.

The test at the moment is both noisy and has a steady regression with each run, 
which makes it hard to feel confident about it, and also makes it impossible to 
just increase the number of runs in order to increase the significance and 
reliability of the result.

Basically, tpaint does 20 runs and currently each run is slower than the 
previous one. I believe it may affect your experience of using the test.
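
That per-run drift is easy to quantify from the raw run log; a small sketch in 
plain JS (the run times below are made up, not actual tpaint data):

```javascript
// Mean difference between consecutive runs: near zero for a stable test,
// clearly positive when each run is slower than the last.
function meanRunToRunDelta(runTimesMs) {
  let total = 0;
  for (let i = 1; i < runTimesMs.length; i++) {
    total += runTimesMs[i] - runTimesMs[i - 1];
  }
  return total / (runTimesMs.length - 1);
}

// Hypothetical tpaint-style run times (ms):
const drifting = [250, 254, 259, 263, 268, 272]; // drifting upward
const stable = [250, 252, 249, 251, 250, 250];   // just noise

console.log(meanRunToRunDelta(drifting).toFixed(1)); // "4.4"
console.log(meanRunToRunDelta(stable).toFixed(1));   // "0.0"
```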

What's more, the test doesn't only test opening a new window, but it also loads 
a document into it (via data: protocol) and measures first paint after the 
document is loaded.

We found out that the regression is visible in tpaint (window.open + document 
load), but not in MozAfterPaint of the browser chrome UI, so your patch may 
have an impact on the tpaint *without* affecting the new window performance.

I'm documenting the findings in 
https://bugzilla.mozilla.org/show_bug.cgi?id=1295292

Seems like :jimm can't commit to it right now, so if anyone else knows how to 
fix it and has cycles, it might help us make better decisions about our code.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mach and ccache failure?

2016-08-19 Thread zbraniecki
On Friday, August 19, 2016 at 4:12:12 PM UTC-7, Xidorn Quan wrote:
> On Sat, Aug 20, 2016, at 08:45 AM, zbranie...@mozilla.com wrote:
> > Both builds take around 43-46 minutes, with ccache hit rate 0.8-1.0%.
> > 
> > This is the same source - mozilla-central from today.
> > 
> > What am I doing wrong?
> 
> Probably your ccache cache is too small? What size did you set?
> 
> - Xidorn

It fills the ccache:

cache directory                     /home/zbraniecki/.ccache
primary config                      /home/zbraniecki/.ccache/ccache.conf
secondary config (readonly)         /etc/ccache.conf
cache hit (direct)                 39320
cache hit (preprocessed)            2983
cache miss                         43686
called for link                     2809
called for preprocessing            2156
compile failed                       512
preprocessor error                   914
bad compiler arguments               121
unsupported source language          191
autoconf compile/link               3074
no input file                      21162
files in cache                     21388
cache size                           8.9 GB
max cache size                      10.0 GB

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mercurial performance (vs git)

2016-08-15 Thread zbraniecki
On Monday, August 15, 2016 at 1:12:51 PM UTC-7, Matthew N. wrote:
> Make sure you have enabled the fsmonitor[1] extension for mercurial if 
> your prompt is using `hg` commands. I believe `mach mercurial-setup` now 
> helps with this.

Ugh, that helps like hell!

I installed watchman and turned on fsmonitor and the prompt for hg went down 
from 1.7s to 0.2s!

Thanks a lot!

zb.
p.s. I'll stick to vcprompt just because it doesn't require that much .sh 
scripting :)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Mercurial performance (vs git)

2016-08-15 Thread zbraniecki
For the last few months I've been mostly using git clone of mozilla-central 
because I'm used to git. Now I'm trying to set up my mercurial environment to 
match what I have for git in order to reduce the bias toward the latter.

One of the crucial parts of my workflow is the git completion shell prompt that 
gives me information about branch I'm on and untracked/modified files.

This is how my shell prompt looks like on gecko-dev (git clone):

zbraniecki@cintra:~/projects/mozilla/gecko-dev (master %=)$

and if I modify any file it may look like this:

zbraniecki@cintra:~/projects/mozilla/gecko-dev (master +%>)$

I tried to get something similar for HG, including hg-prompt (written in 
python), and vcsprompt (written in C), but both are painfully slow.

What's striking is that, on the same repo, git is 3 times faster than hg at 
getting me the shell prompt.

zbraniecki@cintra:~/projects/mozilla/gecko-dev (master %=)$ time vcprompt -f "( 
%b %u%%%m)"
( master ?%)
real0m0.472s
user0m0.236s
sys 0m0.384s

vs

zbraniecki@cintra:~/projects/mozilla/mozilla-central$ time vcprompt -f "( %b 
%u%%%m)"
( default %+)
real0m1.643s
user0m1.224s
sys 0m0.396s


I thought that maybe it's just vcprompt, so I tried status:

zbraniecki@cintra:~/projects/mozilla/mozilla-central$ time hg status

real0m1.706s
user0m1.380s
sys 0m0.316s

vs.

zbraniecki@cintra:~/projects/mozilla/gecko-dev (master %=)$ time git status
On branch master
Your branch is up-to-date with 'origin/master'.

real0m0.399s
user0m0.204s
sys 0m0.332s

If I understand correctly our choice of using mercurial over git was driven by 
the performance. Am I doing something wrong?

It seems like the performance difference is quite substantial.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to measure performance impact of l10n on UI

2016-07-24 Thread zbraniecki
One hour of reading DXR later and I *think* I want to get the timestamp of 
this: 
https://dxr.mozilla.org/mozilla-central/source/layout/base/nsPresShell.cpp#3809

or something around it :)


as this will tell me a couple of things:

1) Things injected into DOM after this timestamp may/will cause reflow.
2) Things injected into DOM before this timestamp are unlikely to cause FOUC
3) If I change any code in ContentSink or HTMLParser, or if I inject a 
MutationObserver that catches nodes as the parser feeds the DOM and modifies 
them, I should see this timestamp being affected and, as a result, performance 
being impacted.

Does it sound like what I'm looking for?

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to measure performance impact of l10n on UI

2016-07-23 Thread zbraniecki
On Friday, July 22, 2016 at 6:53:45 AM UTC-7, Mike Conley wrote: 
> As for MozAfterPaint firing all over the place - you might find this
> useful:
> https://groups.google.com/forum/#!searchin/mozilla.dev.platform/MozAfterPaint/mozilla.dev.platform/pCLwWdYc_GY/j9A-vWm3AgAJ
> 
> See the second example I wrote in
> https://developer.mozilla.org/en-US/docs/Web/Events/MozAfterPaint#Example

Follow-up question.

I started using the transaction ID in my tests and am getting this weird result:

1)  24.00: DOM Loading
2)  65.50: Paint with transaction ID 38
3)  73.00: DOM Interactive (winUtils.lastTransactionId == 39)
4)  74.00: DOMContentLoadedEventStart
5)  94.00: DOMComplete
6) 162.00: loadEventEnd
7) 196.97: Paint with transaction ID 39
8) 271.94: Paint with transaction ID 40

And this order happens relatively often. What's surprising is that the 
winUtils.lastTransactionId read at readyState == 'interactive' is 39.

From what you said in the linked post, the transaction ID 39 is the one that 
has been sent to the compositor *before* document.readyState changed to 
'interactive'.

That means that this transaction did not contain DOM from HTML and will not 
result in a layout yet, so I'm hunting for the *next* transaction after it, 
which has ID 40.

But the next paint happens at 196.97 and it has transaction ID 39, and then 
finally the paint with transaction ID 40 happens at 271.94.

Should I take the one with ID 39 or 40 as "the first paint of the document"?

zb.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to measure performance impact of l10n on UI

2016-07-22 Thread zbraniecki
On Friday, July 22, 2016 at 6:53:45 AM UTC-7, Mike Conley wrote:
> Is the firstPaint timestamp in nsIAppStartup's getStartupInfo[1] not
> sufficient? It's apparently recording the time that firstPaint occurred.
> I think you're already aware of this, but I'm in the process of
> modifying ts_paint to record the delta between the firstPaint timestamp
> and the process start timestamp in bug 1287938.
> 
> If it's not sufficient, I'd like to understand why.

If I understand correctly, firstPaint from getStartupInfo will tell me when the 
first paint of the window occurred.

But since I'm operating in a document (I'm working on about:support document), 
I'm looking for the firstPaint of the document, not the whole browser window.

So, what I'm looking for is something like "performance.timing.firstPaint" for 
each document.

Am I missing something?


> 
> As for MozAfterPaint firing all over the place - you might find this
> useful:
> https://groups.google.com/forum/#!searchin/mozilla.dev.platform/MozAfterPaint/mozilla.dev.platform/pCLwWdYc_GY/j9A-vWm3AgAJ
> 
> See the second example I wrote in
> https://developer.mozilla.org/en-US/docs/Web/Events/MozAfterPaint#Example
> 
> Is any of that helpful?

That seems helpful!

If I understand correctly, I can take the transaction ID at DOMContentLoaded 
(or DOMInteractive?) and assume that the first paint with a transaction ID 
higher than that is the paint that flashed the document.

Then, if my code modifies DOM after that paint, I will reflow/flash.

Is that a correct assumption?
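If that assumption holds, the check could be sketched like this (the helper name is mine, for illustration; `transactionId` on MozAfterPaint and `windowUtils.lastTransactionId` are the chrome-only pieces discussed in this thread):

```javascript
// Decide whether a given MozAfterPaint is the first paint that actually
// showed the document's content, per the assumption above.
function isFirstContentPaint(paintTransactionId, idAtDOMContentLoaded, alreadySeen) {
  return !alreadySeen && paintTransactionId > idAtDOMContentLoaded;
}

// In chrome-privileged code the wiring would look roughly like:
//   let idAtDCL;
//   document.addEventListener("DOMContentLoaded", () => {
//     idAtDCL = winUtils.lastTransactionId; // nsIDOMWindowUtils
//   });
//   let seen = false;
//   window.addEventListener("MozAfterPaint", (e) => {
//     if (isFirstContentPaint(e.transactionId, idAtDCL, seen)) {
//       seen = true;
//       console.log("first content paint at", performance.now());
//     }
//   });

console.log(isFirstContentPaint(40, 39, false)); // true
console.log(isFirstContentPaint(39, 39, false)); // false
```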

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


How to measure performance impact of l10n on UI

2016-07-21 Thread zbraniecki
As part of the work we're doing to replace old l10n API (DTD for HTML/XUL and 
StringBundle for JS) with new API, we're trying to measure the performance cost 
of DTD, StringBundle and its replacements.

The challenge we encountered is that there doesn't seem to be a way to measure 
something that I would intuitively and naively call "first paint".

By first paint, I mean the moment when the engine paints the UI for the first 
time.

I'd expect there to be some way to get it via Performance API, or some 
Mozilla-specific event, but everything I see does not seem to do this.

MozAfterPaint reports every paint, and it fires before DOMContentLoaded, 
between DOMContentLoaded and window.onload, and after. It's impossible to say 
which one of them marks the event I'm after*.

bz created a POC of an API for us that pauses frame creation (**) and that's 
awesome as it ensures that we will not cause FOUCs, but now we need to measure 
when the "first paint" happens with our code vs. with DTD and I don't know how 
to get the required event.
There seems to be a `widget-first-paint` event (***) but if I understand it 
correctly it'll only mark when the chrome window is painted for the first time, 
not a document.

Can someone help us? If we have to add it, where?

Thanks,
zb.


*) And it's not the first one. I can reliably modify visible DOM after the 
first MozAfterPaint and I will not get a FOUC.
**) https://bugzilla.mozilla.org/show_bug.cgi?id=1280260
***) 
https://dxr.mozilla.org/mozilla-central/source/layout/base/nsPresShell.cpp#9157
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Questions about bindings for L20n

2016-06-15 Thread zbraniecki
On Tuesday, June 14, 2016 at 11:51:16 AM UTC+1, Joe Walker wrote:
> I don't think you can say "It's sync unless  in which case it's
> async".
> If that's that case then from the API consumers point of view, then (deep
> voodoo withstanding) it's async.

As weird as it sounds, I believe that you actually can in this case.

Because the API is declarative, we can translate DOM synchronously and if we 
encounter an error, we can either synchronously or asynchronously get the 
fallback.

Which means that we're only dealing with async (potentially) when we hit an 
error scenario.

Dealing with worse performance in error scenarios is still significantly better 
than the current situation where we just crash.
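That flow can be sketched roughly like this (the names here are mine, for illustration, not the actual L20n API): translate synchronously in the common case, and only pay the async cost on the error path.

```javascript
// Hypothetical sketch (not the actual L20n API): format synchronously,
// and only fall back -- possibly asynchronously -- when formatting fails.
function formatWithFallback(primary, loadFallback, id) {
  try {
    return primary.format(id); // common case: fully synchronous
  } catch (e) {
    // error case only: resolve the fallback bundle asynchronously
    return loadFallback().then((bundle) => bundle.format(id));
  }
}

// Mock bundles, for illustration only:
const primary = {
  format(id) {
    if (id !== "known") throw new Error(`missing: ${id}`);
    return "Known string";
  },
};
const loadFallback = () =>
  Promise.resolve({ format: () => "Fallback string" });

console.log(formatWithFallback(primary, loadFallback, "known")); // "Known string"
formatWithFallback(primary, loadFallback, "missing").then((v) => console.log(v)); // "Fallback string"
```

The happy path never leaves the synchronous world; only the error scenario goes through a promise.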

And as Axel pointed out, we can do the error scenario sync or async, depending 
on our decisions that don't affect our architecture.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Questions about bindings for L20n

2016-06-13 Thread zbraniecki
On Monday, June 13, 2016 at 9:39:32 AM UTC+1, Gijs Kruitbosch wrote:
 > Separately, the documentation put forth so far seems to indicate that 
> the localization itself is also async, on top of the asyncness of the 
> mutationobserver approach, and that could potentially result in flashes 
> of unlocalized content, depending on "how" asynchronous that API really 
> ends up being. (AFAIK, if the API returned an already-resolved promise, 
> there might be less chance of that than if it actually went off and did 
> IO off-main-thread, then came back with some results.)

The DOM localization that is used in response to MutationObserver is sync.

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Questions about bindings for L20n

2016-06-10 Thread zbraniecki
On Friday, June 10, 2016 at 10:37:04 AM UTC-7, Gijs Kruitbosch wrote:

> This async-ness will not be acceptable in all circumstances. As a 
> somewhat random example: how would we localize the 'slow script' dialog, 
> for which we have to pause script and then show the dialog? 

Agreed, there are exceptions, and we may have to provide a sync version (which 
will have limited functionality) for such cases.

For this particular case, the way we approached it in FxOS was to do something 
like replacing:

window.alert(l10n.get("slowScriptTitle"));

with:

l10n.formatValue("slowScriptTitle").then(val => window.alert(val));


Would that not work?

> Another example: in docshell, some error page URLs are currently generated 
> synchronously in some circumstances (invalid host/uris, for instance). 
> Making such a location change asynchronous just because of localization 
> is going to break a *lot* of assumptions, not to mention require 
> rewriting a bunch of yucky docshell code that will then probably break 
> some more assumptions...

Yay! :)

If you're saying that we're generating URLs with localized messages in them, 
then I'd question the design...

But as I said, we may have to provide a compatibility layer where we'll have a 
sync variant for those scenarios and discourage it for new code.

> It's much easier to just say "we'll make 
> everything async" when you have a greenfield project like b2g than to 
> retrospectively jam it into 20 years of history (ie Gecko).

It probably is, but you don't want to know how much time it took me to 
transition even the relatively young project from sync to async! ;)
 
> Not all JS and C++ code that will want to localize things has access to 
> a document object, and for all consumers to have to create one just to 
> use localization features would be cumbersome (and, as I understand it, 
> would not work without also inserting all the stringbundle things you'd 
> need). Please can we make sure that we have a pure-JS/C++ API that is 
> usable without having to have a document? (Currently, you can create 
> nsIStringBundle instances via XPCOM, and PluralForm can be used as a jsm 
> but not from C++, which also already causes headaches.)

We'll definitely have pure JS code. We're going to land JSM code, and as I 
said, Intl stuff (like PluralRules) will be available straight from 
SpiderMonkey (Intl.PluralRules). Although in the L20n world, as an engineer you 
won't ever need to use PluralRules manually :)

For C++, we may wrap the JS API and expose it in C++, but we may also try to 
move l10n in C++ up the stack and make C++ code carry l10n IDs, with JS UI 
code localizing them.

> I'm quite worried some of this won't be workable. For instance, XUL 
> panels make decisions about how big they need to be based on their 
> contents. We'll need to ensure that the content in such panels is 
> present and localized before attempting to show the panel. We can't just 
> add the attributes, show the panel, and hope for the best. If we insert 
> extra turns of the event loop in here because we're ending up waiting 
> for localization, that'll make it harder to deal with state changes (I 
> clicked this button twice, is the popup open or closed? etc. etc.)

That's a great point. As I said in my previous email I'd love a way to prevent 
frame creation until JS init code is done.

We may also decide to move the MutationObserver part in Gecko to ContentSink, 
or design an API that we'll plug into our DOM that will work better for us than 
Mutation Observer.

So far MO works well and gives us the results we need.

> This is still problematic in terms of markup though. It's not uncommon 
> to have 3 or more DTDs in a file, and I can just use an entity without 
> asking what bundle it's from. Having to specify it for any "non-main" 
> bundle would be problematic. Why can't we just fall back to using the 
> other available bundles?

By default you will have all your "DTD"s in the "main" bundle, and we'll loop 
over them to localize your elements. So that works the way you expect.

On top of that, you'll also be able to specify more bundles with more source 
files. That's where named ones come in.

Thanks,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Questions about bindings for L20n

2016-06-10 Thread zbraniecki
Hi Gijs,

On Friday, June 10, 2016 at 2:49:16 AM UTC-7, Gijs Kruitbosch wrote:

> Mutation observers or mutation events? How do you decide which elements 
> you observe? Observing the entire DOM tree seems like it'd likely be 
> terrible for performance once we start mutating the DOM. Have you done 
> any measurements on the performance of this approach when large amounts 
> of DOM are inserted (ie not about:support :-) )? How do you decide on 
> which documents you add these observers to?

We're using Mutation Observers, and we haven't observed (no pun intended) any 
performance impact yet. We've been using them on the slowest devices that FxOS 
has been designed for, and they performed surprisingly well.

While working with Mutation Observers I tried to evaluate the potential to 
optimize them to increase the signal/noise ratio of callbacks, and talked to 
people like Olly and Anne about potential improvements that would work better 
for our use case [0].

The general response to my questions was -
a) It seems similar to Microsoft's NodeWatch proposal [1]
b) They asked us to show them an example of where the current API is slow for 
our use case and they'll help us develop a better one.

So far we have failed to find a case where MutationObserver would have a 
noticeable negative impact on performance.

Would you by any chance know of any piece of Firefox that does large amounts of 
DOM insertions that we could test against?

 
> MutationObservers are async, and dtd localization in XHTML is currently 
> synchronous on parsing. That seems like a large change that will cause a 
> lot of problems relating to reflow / flashes of unlocalized content 
> (keep in mind we use l10n data for style as well)


Correct. It's a major change.

Similarly to the performance concerns, FOUCs are on our mind, and we've been 
working on this technology initially targeting very slow devices. We've been 
able to get a no-FOUC experience so far, but we know it's not deterministic.

We're in a position similar to many other developers who want to use JS to 
alter DOM before frame creation and layout happen. [2]

> , tests that expect synchronous changes as a result of actions

We'll have to fix the tests. Yes.

> , as well as issues where we would want the localized changes in elements 
> that aren't in the page DOM (so constructed in JS, but not included in the 
> DOM (yet)).

That's actually fairly well solved in our approach. By default, localization 
happens only when you inject your DOM fragment into the DOM, but you can also 
manually call "translateFragment", which will do this on a disconnected fragment.

> You don't  mention a JS/C++ API, which we need for e.g. strings we pass to 
> message  boxes or into the argument strings for 
> about:neterror/about:certerror. 
> What are your plans in that department?

Two fold.

First of all, we are planning a pure JS API. In fact, we have Node as our 
target, which obviously doesn't use any DOM.

The API is not finalized, but it'll allow you to do the same thing you do in 
DOM from JS:

var bundle = new LocalizationBundle([
  'path/to/source1',
  'path/to/source2'
]);

bundle.formatValue('entityId').then(val => console.log(val));

On top of that we'll probably provide some synchronous way to get the value, if 
only for compatibility mode, but we'll actively discourage using it, and code 
that uses it will not benefit from the features of the framework.

Secondly, we'll be advocating that people move localization to the front end 
of their code. Except for a few cases, there's no reason to localize a message 
deep in your code and carry a translated string around; instead, the entityId 
should be carried around and resolved only in the UI.
 
> Less markup is better, so please don't wrap in more custom elements.

So, you're saying that:

 // implicit bundle 'main'
 // implicit bundle 'main'



is preferred over:


  
  


  
  


?


> It's not clear to me why we need a key/value object rather than a 
> sequence as we use now. Perhaps just a semicolon-separated string with 
> \; as an escape for literal ; ? That'd certainly be easier to read/write.

A semicolon-separated string would be flat. Stringified JSON allows us to build 
deeper structures.

We provide a wrapper API to facilitate that:

document.l10n.setAttributes(element, 'l10nId', {
  user: {
    'name': "John",
    'gender': "male"
  }
});

will assign data-l10n-id and data-l10n-args to the element, while

const {
  l10nId,
  l10nArgs
} = document.l10n.getAttributes(element);

handles the reverse.
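A minimal sketch of what such wrappers might do, assuming the args are stored as stringified JSON in data-l10n-args as described above (illustrative only, not the actual implementation):

```javascript
// Sketch of the wrapper pair described above (assumption: args live in
// the data-l10n-args attribute as stringified JSON).
function setAttributes(element, id, args) {
  element.setAttribute("data-l10n-id", id);
  if (args) {
    element.setAttribute("data-l10n-args", JSON.stringify(args));
  }
}

function getAttributes(element) {
  const argsAttr = element.getAttribute("data-l10n-args");
  return {
    l10nId: element.getAttribute("data-l10n-id"),
    l10nArgs: argsAttr ? JSON.parse(argsAttr) : null,
  };
}

// Demo with a minimal element stand-in (no DOM needed):
const attrs = new Map();
const el = {
  setAttribute: (k, v) => attrs.set(k, v),
  getAttribute: (k) => (attrs.has(k) ? attrs.get(k) : null),
};
setAttributes(el, "hello-user", { user: { name: "John", gender: "male" } });
console.log(getAttributes(el).l10nArgs.user.name); // "John"
```

A nested object like { user: { name, gender } } round-trips losslessly here, which is exactly what a flat semicolon-separated list couldn't express.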

> Otherwise, it also seems wrong to require the bundle name 
> (data-l10n-bundle) on every localized element. The observer should be 
> able to simply iterate through the stringbundles in declaration order 
> until it finds a matching symbol.

It will iterate over sources in a single l10n-bundle.
In most cases, you will only have one l10n-bundle per document, so no need to 
explicitly name it or refer to it.

If you want a separate another 

Questions about bindings for L20n

2016-06-10 Thread zbraniecki
While working on the new localization API (See Intent to Implement post from 
yesterday), we're developing bindings into UI languages used by Firefox and we 
have some decisions to make that could be better answered by this group.

The general API is declarative and DOM-based.  Instead of forcing developers to 
programmatically create string bundles, request raw strings from them and 
manually interpolate variables, L20n uses a Mutation Observer which is notified 
about changes to data-l10n-* attributes.  The complexity of the language 
negotiation, resource loading, error fallback and string interpolation is 
hidden in the mutation handler.  Most of our questions in this email relate to 
what the best way to declare resources is.
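As a rough illustration of that mutation handler (the names here are mine, not the actual L20n internals), the observer only needs to collect the affected elements from the mutation records and retranslate them:

```javascript
// Attributes the observer cares about, per the API described above.
const L10N_ATTRS = ["data-l10n-id", "data-l10n-args"];

// Collect the elements that need (re)translation from a batch of
// mutation records.
function collectTranslationTargets(mutations) {
  const targets = new Set();
  for (const m of mutations) {
    if (m.type === "attributes") {
      targets.add(m.target); // data-l10n-* changed on an existing element
    } else if (m.type === "childList") {
      for (const node of m.addedNodes) targets.add(node); // newly inserted
    }
  }
  return targets;
}

// In a document, the wiring would be roughly:
//   const observer = new MutationObserver((muts) => {
//     translateElements([...collectTranslationTargets(muts)]);
//   });
//   observer.observe(document.documentElement, {
//     childList: true,
//     subtree: true,
//     attributes: true,
//     attributeFilter: L10N_ATTRS,
//   });
```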


1) HTML API

Our HTML API has to allow us to create a set of localization bundle objects, 
each with a unique name, that aggregate a set of localization sources.  It also 
has to allow us to annotate elements with L10n ID/Args pairs and potentially 
with L10n Bundle reference id.

Currently, our proposal looks like this:


  





  
  

  


Resource URIs are identifiers resolved by a localization registry which -- 
similar to the chrome registry -- knows which languages are available in the 
current build and optionally knows about other locations to check for resources 
(other Gecko packages, langpacks, remote services etc.). Localization bundles 
can query the registry multiple times to get alternative versions of a 
resource, a feature which makes it possible to provide a runtime fallback 
mechanism for missing or broken translations.

We're considering allowing names to be omitted, which would imply the "default" 
bundle, to reduce the noise for scenarios where only a single l10n bundle is 
needed.  There's also a document.l10n collection which stores all localization 
bundles by name, manages the Mutation Observer and listens to languagechange 
events.

The open questions are:

 * Would it be better to instead use custom elements like  
 ?
 * Are data-l10n-* for attributes OK?
 * Is there a better way to store arguments than stringified JSON?  We 
considered storing arguments as separate attributes (e.g. 
data-l10n-arg-user="John") but that would make it impossible for the Mutation 
Observer to know what to observe.
 * Any other feedback on the design?



2) XUL API

For XUL, we would like to use custom elements for bundles which are bound by 
XBL. The binding looks for  elements and creates a localization bundle 
object which is also available via the document.l10n collection.


  


  
   object?
 * Is it okay to use data-l10n-* attributes for localizable elements? Or 
perhaps l10n-* would be sufficient?



3) XBL API

For XBL, we plan to use the same XUL bindings but inside of the anonymous 
content.  Again, this creates a localization bundle object which is available 
via the document.l10n collection.


  

  
  


Open questions:

 * We understand that this creates and destroys the element each time the 
parent is bound/unbound. Is there UI that does that on a timing-sensitive path 
extensively? That'd be good to measure.

 * Mutations inside of the anonymous content are caught by document.l10n's 
observer; are there plans to unify this with how mutations are handled in 
shadow DOM, where observers observing non-anonymous content aren't notified 
about mutations in the anonymous content?



4) Performance measuring

We need to evaluate the performance impact of the change. So far we've been 
able to measure the loading time of about:support with DTD/StringBundle vs. L20n 
using the Performance Timing API, and the results are promising (perf win!), 
but we don't know how representative it is for Firefox startup and memory.

Question: Which performance tests should we run to ensure that L20n is indeed 
not regressing performance of Firefox?


That's it for now. We appreciate your feedback and comments!
Your L10n Team
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to Implement: New L10n/I18n framework for Gecko

2016-06-08 Thread zbraniecki
Summary: 

Gecko's localization framework has not really changed much in the last 20 years 
and is based on two data formats - DTD and .properties - neither of which was 
designed for localization purposes.
Our internationalization status is a combination of DIY helper functions, 
ICU-based APIs and ancient code written for Netscape.
The current situation results in a high maintenance burden and lower 
translation quality, and keeps us locked out of the ability to develop new 
localization-related features.

Over last three years we created a modern localization and internationalization 
infrastructure that we deployed for Firefox OS and put on ECMA standardization 
path. Now we intend to integrate it into Gecko, and migrate Firefox to it.

We're going to host a session about this project in London, on Friday at 13:15 
- 
https://mozillalondonallhands2016.sched.org/event/79As/the-future-of-l10ni18n-in-firefox-and-gecko

The two pillars are:

1) Localization

For l10n we intend to base our API on L20n API designed by Axel Hecht, Stas 
Malolepszy and me and use the newly designed L20n syntax, which is based on 
ICU's Message Format.
A single localization format will reduce both the technical burden and the 
complexity of our ecosystem. 
A new API will also result in cleaner and easier to maintain code base 
improving quality and security of our products. The new API will provide a 
resilient runtime fallback, loosening the ties between code and localizations. 
That will empower more experiments on shipping code and shipping localizations.

2) Internationalization

For i18n we intend to leverage our current design plan for ECMA 402 (JS I18n 
spec) and deploy the spec proposals that originally came from FxOS 
requirements. This will allow us to unify our I18n architecture, reduce code 
redundancy and end up with Gecko's I18n being the same as JS I18n.


The new infrastructure has been designed to work together - L20n ties perfectly 
into the I18n formatters, while parts of the L20n API and syntax may end up 
becoming a Web Localization standards proposal.

Our goals are to significantly improve our ability to create high quality 
multilingual user interfaces, simplify the l10n API for developers, improve 
error recovery and enable us to innovate.

The first area of innovation that we're planning to build on top of the new 
infrastructure is "Live Updates" - a technology that will allow us to pull 
localization resources independently of code base updates, enabling scenarios 
like partial translation releases where the localization is added within a 
couple of days after the product is available.

Bug: Meta bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1279002

Current POC: https://github.com/zbraniecki/gecko-dev/tree/l20n

Link to standards:

Intl:
*) https://tc39.github.io/ecma402/
*) https://github.com/tc39/ecma402#current-proposals
   - Intl.RelativeTimeFormat ( 
https://github.com/zbraniecki/intl-relative-time-spec )
   - Intl.PluralRules ( https://github.com/tc39/proposal-intl-plural-rules )
   - Intl.UnitFormat ( https://github.com/zbraniecki/proposal-intl-unit-format 
) 
   - Intl.ListFormat ( https://github.com/zbraniecki/proposal-intl-list-format )

L10n:
*) http://l20n.org/
*) https://github.com/l20n/l20n.js
*) (icu-design proposal) https://sourceforge.net/p/icu/mailman/message/35027629/
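For a taste of two of the Intl proposals linked above, this is the shape they eventually took once standardized in ECMA 402 (modern-engine code; at the time of this post they were still drafts):

```javascript
// Intl.RelativeTimeFormat, as eventually standardized in ECMA 402.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
console.log(rtf.format(-1, "day")); // "yesterday"
console.log(rtf.format(3, "week")); // "in 3 weeks"

// Intl.ListFormat, likewise.
const lf = new Intl.ListFormat("en", { style: "long", type: "conjunction" });
console.log(lf.format(["Motorcycle", "Bus", "Car"])); // "Motorcycle, Bus, and Car"
```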

Platform coverage:

Our initial plan is to migrate Firefox to the new architecture.
In the future we'll look into enabling it for Web Extensions and other platform 
targets.
As we progress with standardization of the I18n and L10n APIs through ECMA 402 
we will expose those APIs to the public.

Estimated or target release: 

At this point we do not have clear visibility into which release we will 
target. We plan to enable the APIs gradually, starting with the L20n JSM module and 
HTML/XUL bindings. We would like to start landing the first batch of patches 
over the next month.

Do other browser engines implement this?

Other vendors are working with us to standardize I18n APIs through the TC39 
working group. We plan to standardize most of the new formatters in the 4th 
edition of ECMA 402 and we expect other vendors to implement it then.
L10n API is less mature and we expect to work with ICU, W3C and TC39 to come up 
with pieces of API that we will be able to push for standardization.

Please, share any feedback and come to our session in London!

zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-05-04 Thread zbraniecki
Hi David,

I'm one of the editors of ECMA 402 and a champion of multiple proposals there. 
I'd like to respond to your comment:

On Saturday, April 30, 2016 at 1:26:53 PM UTC-7, David Baron wrote:
> I still find it sad that ECMAScript Intl came (as I understand it)
> very close to just standardizing on a piece of software (ICU), and
> also find it disturbing that we're going to extend that
> standardization on a particular piece of software (possibly even
> more rigidly) into other areas.  I think many of the arguments we
> made against standardizing on SQLite seem to apply to ICU as well,
> such as security risk and needing to reverse-engineer when writing
> future implementations of parts of the Web platform.

I disagree with this statement. While we definitely look at ICU APIs as one 
source of prior art, we don't necessarily follow ICU. We design APIs based on 
what we see people request the most.
In some cases, we align our API with ICU because we believe ICU got it right 
(DateTimeFormat), in others we go with our own API (PluralRules, UnitFormat, 
DurationFormat) and in yet others, we standardize something that ICU wants to 
pick from us (NumberFormat.formatToParts).
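For example, here is how two of the cases mentioned - PluralRules (our own API) and NumberFormat.formatToParts (the piece ICU wanted to pick from us) - look as standardized in ECMA 402 (modern-engine code; at the time of this post they were still proposals):

```javascript
// Intl.PluralRules: select the CLDR plural category for a number.
const pr = new Intl.PluralRules("en");
console.log(pr.select(1)); // "one"
console.log(pr.select(5)); // "other"

// Intl.NumberFormat.prototype.formatToParts: the formatted number broken
// into typed parts, so callers can restyle or reassemble them.
const parts = new Intl.NumberFormat("en").formatToParts(1234.5);
console.log(parts.map((p) => `${p.type}:${p.value}`).join(" "));
// "integer:1 group:, integer:234 decimal:. fraction:5"
```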

What's important here, is that we deliberately write the spec to not depend on 
ICU and our reference implementation (Intl.js) is pure JS with no dependency on 
ICU so we are fairly certain that you don't need ICU to implement JS Intl API.

What we do try to standardize around is CLDR, as a kind of "Wikipedia for i18n 
data". CLDR is just a database, and having all major companies contribute to it 
makes it very powerful in giving us access to all the data we may need for 
internal and JS Intl API needs.

> 
> While I expect that some of the features that Intl provides (from
> ICU data) are worthwhile in terms of codesize, I'm certainly not
> confident that they all are.  I have similar worries about other
> large chunks of code that land in our tree...
> 
> And when I say worthwhile, I'm talking not just about whether the
> feature is intrinsically valuable, but whether it's actually going
> to be used by Web developers to get that value to users.

We create APIs based on user needs. The way we determine what should stay in 
userland and what is worth standardizing is of course subjective, but we aim 
to lower the barrier to writing good multilingual applications, so, obviously, 
we prioritize what's commonly used.

> How much value does ICU get from dropping Windows XP support?  Can
> we push back on their plans to do so, at least for the parts that we
> use?  (It also seems to be that we need to answer the question,
> already raised in this thread, about whether the parts that are
> expensive for them to support intersect at all with the parts that
> we use.)

Unfortunately, it seems that ICU has decided to drop Win XP support in ICU 58. 
Maybe we can provide them with strong reasons not to?

We're currently starting an effort to deploy a new L10n/I18n infrastructure for 
Firefox. While working on some of our most common needs (PluralRules, 
RelativeTimeFormat, UnitFormat), we reported bugs in CLDR and they are being 
fixed in time for CLDR 30.
So while we may not need to update ICU for a while and could potentially get 
stuck on ICU 57 (I don't have enough knowledge to understand what might be the 
cost of that), I'd like to make sure we can move forward with updating CLDR in 
Gecko.

Thanks,
zb.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Allowing web apps to delay layout/rendering on startup

2015-10-10 Thread zbraniecki
On Saturday, October 10, 2015 at 4:28:30 AM UTC-7, Mounir Lamouri wrote:
> On Sat, 10 Oct 2015, at 02:02, zbranie...@mozilla.com wrote:
> > On Friday, October 9, 2015 at 10:51:54 AM UTC-7, Mounir Lamouri wrote:
> > > As far as speed feeling goes, they would win to show something as soon
> > > as possible and handle any post-first paint loading themselves.
> > 
> > That is unfortunately not consistent with my experience. People tend to
> > perceive visible progressive rendering as much slower than delayed
> > first-paint in most scenarios.
> > 
> > On top of that, it is perceived as a really_bad_ux.
> 
> I don't think I agree but I don't mean to discuss Firefox OS product
> decisions.

I don't think it's specific to Firefox OS. FOUC is a pretty well-researched 
problem and we have spent a lot of time designing standards to limit the risk 
of it happening.

The problem is that all we take into account when building anti-FOUC heuristics 
is HTML+CSS, while in modern Web Apps, JS is part of the bootstrap process.

Of course a lot of things still apply, and we still aim at minimizing the 
amount of JS necessary during bootstrap, but the state of the art is that we do 
need, and will need, some minimal JS executed before firstPaint.

One other reason it is necessary is that compared to web pages, there is no 
static content in HTML. Hell, there is no content at all.
What is the value for the user of seeing the Music index.html file [0] before 
JS kicks in?
Literally zero.

And if JS breaks? The app will not work. At all.

While with web pages you can salvage things and aim at displaying the content 
even without JS or CSS, because the goal is an article, with a Web App the goal 
is the "Play" button, the "New SMS" button, etc., and those will not work 
without JS.

So the first valuable thing to see comes only after HTML+JS provide the chrome 
of the app and CSS provides styling.

And it does not work well if this chrome flashes as it relayouts while JS 
finalizes it after firstPaint.

As Vivien said, yeah, of course we want to show something as soon as possible - 
minimal useful chrome - and let above-the-fold content load later; that's 
precisely how we designed our responsiveness guidelines [1], and we're 
incentivizing developers to bring a minimal UI up early.

But that still requires JS. And currently we don't take it into account when 
discussing FOUCs, so the platform races to paint while JS races to finish 
preparing the chrome.

The result is twofold - one is FOUCs if we are unsuccessful; the other, if we 
are successful, is bad code. That's where we get synchronous <script> tags 
injected at the end of the markup to prevent FOUCs. That's where we get 
"document.write" and synchronous XHRs. All there to win the race with Gecko.

I see huge value for the quality of the Web App stack in removing the whole 
notion of this race. And Vivien's proposal seems to do just that with minimal 
invasiveness.

zb.

[0] https://github.com/mozilla-b2g/gaia/blob/master/apps/music/index.html
[1] https://developer.mozilla.org/en-US/Apps/Build/Performance/Firefox_OS_app_responsiveness_guidelines
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Allowing web apps to delay layout/rendering on startup

2015-10-09 Thread zbraniecki
On Friday, October 9, 2015 at 10:51:54 AM UTC-7, Mounir Lamouri wrote:
> As far as speed feeling goes, they would win to show something as soon
> as possible and handle any post-first paint loading themselves.

That is unfortunately not consistent with my experience. People tend to 
perceive visible progressive rendering as much slower than delayed first-paint 
in most scenarios.

On top of that, it is perceived as really _bad_ UX.

That means that while Gecko is trying to do what you said - paint as soon as 
possible and handle everything later - Firefox OS apps are trying to do exactly 
the opposite: squeeze as much startup JS logic as possible in before firstPaint.

Because they cannot control it, it is a race condition between two heuristics, 
which involves lots of dirty tricks, unfair punches and other nasty, nasty 
stuff.

What we are trying to get is control over firstPaint for apps that want to 
decide when the first paint happens. That would remove the race condition and 
actually free Gecko from the burden of trying to analyze when to start painting.
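
The contract we are after can be sketched in a few lines: the app, not the 
platform, resolves a gate that releases first paint. Everything below is 
illustrative; none of these names are a shipped API, and the real mechanism 
would be a platform hook rather than a userland promise.

```javascript
// Illustrative sketch only: the names here are hypothetical, not a shipped
// API. The idea is that the app resolves a "gate" when its chrome is ready,
// and the platform waits on it before performing the first paint.
function createPaintGate() {
  let release;
  const ready = new Promise(resolve => { release = resolve; });
  return {
    ready,    // the platform would await this before painting
    release,  // the app calls this once its chrome is laid out
  };
}

// App side: build the chrome, then release the gate.
const gate = createPaintGate();
gate.ready.then(() => console.log('platform: safe to paint'));
// ... run startup JS, build the chrome ...
gate.release();
```

With such a gate there is nothing left to race: Gecko simply waits for the 
app's explicit signal instead of guessing.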

zb.


Re: Read only non-privileges access to mozSettings (old navigator.mozHour12 in platform)

2015-10-09 Thread zbraniecki
On Friday, October 9, 2015 at 6:43:02 AM UTC-7, Ehsan Akhgari wrote:
> On 2015-10-08 8:27 PM, zbranie...@mozilla.com wrote:
> > Currently, any app that needs any of that information, has to get elevated 
> > privileges to let it *set* them, while almost every app that works with UI 
> > will just want to retrieve that.
> 
> As long as you want to do something b2g specific (which from the content 
> of your next email I gather is what you're trying to do now) why not 
> just solve this one issue and keep using mozSettings?

If by one issue you mean "read-only access to selected settings without 
elevated privileges" then I'm totally in.

If you mean "solve the hour12 case and deal with measurement units, weather 
units and first day of the week separately", then I'm afraid it will be the 
same hassle each time.
Also, even with this one hour12 thing I'm in limbo between "just do this in 
your platform for now" and "standardize it, but design it well, which will take 
a lot of time".

So, what is your suggestion?

zb.


Read only non-privileges access to mozSettings (old navigator.mozHour12 in platform)

2015-10-08 Thread zbraniecki
In a couple threads we've been debating back and forth about what we currently 
have as navigator.mozHour12 in Firefox OS.

It's a tri-value setting (undefined, true, false) that we take from mozSettings 
to adjust our Clock to the user's chosen setting.

For a while, I've been asking to get it in the platform and the response was 
that we should aim to standardize it.

The challenge with standardizing it is that it is just one of many values that 
we will want to have. We can standardize just this single variable, but soon we 
will have to standardize another, and then another, and so on.

When I brought it up, asking to design the API to be flexible for future 
values, it instantly scaled up the complexity for some participants.

So, I spent some time planning and thinking about it, and I came up with a set 
of user-defined variables that we should expose to web authors, and a proposal 
for an API to do so.

The variables:

 ! hour 12/24 clock (undefined, true, false)
 ! first day of the week (integer 0-6)
 ! weather unit (celsius, fahrenheit, kelvin)
 ! distance units (metric, imperial)
 - weekendStarts (integer 0-6)
 - weekendEnds (integer 0-6)
 - show seconds (undefined, true, false)
 - calendar (string - gregorian, buddhist, coptic etc.)
 - currentTimezone
 - currency
 - sorting settings

And those are only the l10n/intl-related ones. I can imagine that other areas 
might have similar needs:

 - accessibility (high-contrast, reverse colors etc.)
 - parental controls (hide explicit content)
 - notifications (lots of opportunity here)
 - sounds settings (mute, vibrate, level for different sound types)

and even some things that we already expose in various ways that would really 
fit as part of that API:

 - network status (online, offline, download speed, upload speed)

Not all of them are needed now; I marked the ones we should expose now with 
"!". But those are the kind of values that we may want at some point to:
 - allow users to set in Settings
 - have some automatic value that may be dynamically computed (e.g. the 
default hour12 depends on language settings)
 - allow apps to retrieve
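
As an aside on the "dynamically computed" part: for hour12 specifically, a 
locale-derived default can already be computed from Intl. This is only a 
sketch; the function names below are mine, not an existing API.

```javascript
// Sketch of how a dynamic default for the tri-value hour12 setting
// (undefined / true / false) could be derived from the language settings.
// The function names are illustrative, not an existing API.
function defaultHour12(locale) {
  // Intl.DateTimeFormat knows each locale's conventional hour cycle.
  return new Intl.DateTimeFormat(locale, { hour: 'numeric' })
    .resolvedOptions().hour12;
}

function effectiveHour12(userSetting, locale) {
  // An explicit user choice (true/false) wins; undefined falls back
  // to the locale default.
  return userSetting !== undefined ? userSetting : defaultHour12(locale);
}
```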

I understand that we want to start small, but I'm confident that we should 
start looking for a solution for the 4 settings I marked with "!".

And I believe that we shouldn't expose each variable separately on navigator. 
I believe that we should plan it as a counterpart to the mozSettings API, 
because it is, after all, a read-only view of user settings.

Currently, any app that needs any of that information has to get elevated 
privileges that let it *set* those values, while almost every app that works 
with UI will just want to retrieve them.

Can we get something into our platform for now, so that we can increase 
security and move forward with allowing ourselves and third-party devs to 
create good UX in Firefox OS, and then merge this feature with our work to 
standardize mozSettings?

Thanks,
zb.


Re: Read only non-privileges access to mozSettings (old navigator.mozHour12 in platform)

2015-10-08 Thread zbraniecki
I promised a proposal:

navigator.mozSettings.get('locale.hour12').then(value => {
  console.log('value for locale.hour12 is: ' + value);
});

This would be asynchronous, and only available for a small set of variables 
(whitelisted).
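
For illustration, here is a userland mock of that contract. The whitelist 
contents and all the names are hypothetical; this is just a sketch of the 
proposed shape, not a shipped API.

```javascript
// Userland mock of the proposed read-only, whitelisted getter.
// The whitelist contents and all names are hypothetical.
const READABLE_SETTINGS = new Set(['locale.hour12', 'locale.firstDayOfWeek']);

function makeReadOnlySettings(store) {
  return {
    get(name) {
      if (!READABLE_SETTINGS.has(name)) {
        return Promise.reject(
          new Error('not readable without elevated privileges: ' + name));
      }
      // Resolves with the stored value (possibly undefined).
      return Promise.resolve(store[name]);
    },
  };
}
```

A privileged app would keep using the full mozSettings API to *set* values; 
unprivileged UI code would only ever see this getter.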

zb.


Re: API request: MutationObserver with querySelector

2015-10-08 Thread zbraniecki
We're about to start working on another API for the next Firefox OS, this time 
for DOM Intl, that will operate on `data-intl-format`, `data-intl-value` and 
`data-intl-options`.

It would be much easier for us to keep l10n and intl separate and independent, 
but in the current model we will have two MutationObservers reporting 
everything that happens on document.body just to fish for elements with those 
attributes. Twice.

So we may have to introduce a single mutation observer that handles that for 
both, which will be a bad design decision but will improve performance.
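
Roughly, that shared observer would dispatch records to per-selector handlers. 
Below is a sketch of the dispatch logic only; nothing in it is an existing 
API, and the DOM call (`Element.matches()`) is reached through the record's 
target so the core can be exercised anywhere.

```javascript
// Sketch of the workaround: one shared observer callback that fans
// mutation records out to per-selector consumers (l10n and intl).
// In real code, r.target.matches() would be Element.matches().
function createSelectorDispatcher(handlers) {
  // handlers: Array of { selector, callback }
  return function dispatch(records) {
    for (const { selector, callback } of handlers) {
      const hits = records.filter(r => r.target.matches(selector));
      if (hits.length > 0) callback(hits);
    }
  };
}

// In a document, this would be wired up roughly as:
//   const observer = new MutationObserver(createSelectorDispatcher([...]));
//   observer.observe(document.body,
//                    { childList: true, subtree: true, attributes: true });
```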

I reported it a month ago and have so far had no response. What's my next step 
to get this into our platform?

zb.