Re: Dogfooding Warp

2020-09-24 Thread David Teller

That's an impressive speedup!

Congrats on enabling this, everyone.

On 24/09/2020 14:56, Jan de Mooij wrote:

Warp is now enabled by default on Nightly, after positive feedback
from users dogfooding it [0,1].

Here are just a few of the Talos/Raptor graphs showing improvements
when Warp landed:

- 20% on Win64 GDocs loadtime: https://mzl.la/3cp6dAs
- 13% on Android Reddit SpeedIndex: https://mzl.la/2RUWdp8
- 18% on pdfpaint: https://mzl.la/2HtXb9W
- 8% on tp6 JS memory: https://mzl.la/3j2VwGb
- 8% on damp (devtools perf): https://mzl.la/3kLbhSM

Please let us know if you notice any improvements or regressions.

Thanks,
The Warp team

[0] 
https://www.reddit.com/r/firefox/comments/itib6s/dogfooding_warp_on_nightly_new_js_jit_engine/
[1] 
https://www.reddit.com/r/firefox/comments/iy2036/nightly_is_finally_feeling_as_fast_as_chromium/

On Tue, Sep 15, 2020 at 2:57 PM Jan de Mooij  wrote:

Hi all,

The SpiderMonkey (JS) team has been working on a significant update to
our JITs called WarpBuilder (or just Warp) [0,1]. Before we enable
Warp by default in Nightly (hopefully next cycle in 83) we need your
help dogfooding it.

Warp improves performance by reducing the amount of internal type
information that is tracked, by optimizing for a broader spectrum of
cases, and by leveraging the same CacheIR optimizations used by last
year’s BaselineInterpreter work [2]. As a result, Warp has a much
simpler design and improves responsiveness and page load performance
significantly (we're seeing 5-15% improvements on many visual metrics
tests). Speedometer is about 10% faster with Warp. The JS engine also
uses less memory when Warp is enabled.

To enable Warp in Nightly:

1. Update to a recent Nightly
2. Go to about:config and set the "javascript.options.warp" pref to true
3. Restart the browser

We're especially interested in stability issues and real-world
performance problems. Warp is currently slower on various synthetic JS
benchmarks such as Octane (which we will continue investigating in the
coming months) but should perform well on web content.

If you find any issues, please file bugs blocking:

https://bugzilla.mozilla.org/show_bug.cgi?id=1613592

If you notice any improvements, we'd love to hear about those too.

Finally, we want to thank our amazing contributors André Bargull and
Tom Schuster for their help implementing and porting many
optimizations.

Turning Warp on is only our first step, and we expect to see a lot of
new optimization work over the next year as we build on this. We are
excited for what the future holds here.

Thanks!
The Warp team

[0] WarpBuilder still utilizes the backend of IonMonkey so we don't
feel it has earned the WarpMonkey name just yet.
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1613592
[2] 
https://hacks.mozilla.org/2019/08/the-baseline-interpreter-a-faster-js-interpreter-in-firefox-70/

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



Re: Intent to unship: FTP protocol implementation

2020-03-19 Thread David Teller
Out of curiosity, what external application? OS-specific?

On 19/03/2020 01:24, Michal Novotny wrote:
> We plan to remove FTP protocol implementation from our code. This work
> is tracked in bug 1574475 [1]. The plan is to
> 
> - place FTP behind a pref and turn it off by default on 77 [2]
> - keep FTP enabled by default on 78 ESR [3]
> - remove the code completely at the beginning of 2021
> 
> We're doing this for security reasons. FTP is an insecure protocol and
> there are no reasons to prefer it over HTTPS for downloading resources.
> Also, a part of the FTP code is very old, unsafe and hard to maintain
> and we found a lot of security bugs in it in the past. After disabling
> FTP in our code, the protocol will be handled by an external application,
> so people can still use it to download resources if they really want to.
> However, it won't be possible to view and browse directory listings.
> 
> 
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1574475
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1622409
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1622410


Re: Deprecation of NS_NewNamedThread

2020-03-02 Thread David Teller
That's cool!

I wonder if there is (or will be) a way to preserve the naming part of
NS_NewNamedThread, which is sometimes precious for debugging: that is,
a way to attach debugging information to the background thread, in
addition to the stack, that would let us analyze what the thread was
attempting to do in case of a crash.

Is anything like this planned?

Cheers,
 David


Re: Upcoming changes to hg.mozilla.org access

2019-11-03 Thread David Teller
For what it's worth, when I last tried, I couldn't even `moz-phab
submit` a self-reviewed patch. I had to arbitrarily pick another
reviewer for a patch that was not meant for landing (it was a
demonstration of a reproducible bug in phabricator, but that's another
story).

Cheers,
 Yoric

On 03/11/2019 11:14, Emilio Cobos Álvarez wrote:
> On 11/2/19 12:53 PM, Andreas Tolfsen wrote:
>> Documentation changes have historically been well served by a “wiki
>> editing”/micro adjustments approach.  I wonder if there is anything
>> we can do with Phabricator to ease review requirements for documentation
>> changes from peers?
> 
> I think you can land patches without review even with Lando. I
> personally think that's acceptable for typo fixes / documentation
> updates / etc.
> 
> It's certainly a few more clicks than `git push` / `hg push` though.
> 
>  -- Emilio
> 
> 


Re: Please aim to add informative messages to your exceptions

2019-09-14 Thread David Teller
Very good news!

Does this have any impact on SpiderMonkey error handling?

Cheers,
 David

On 14/09/2019 06:47, Boris Zbarsky wrote:
> Hello,
> 
> ErrorResult has two kinds of exception-throwing APIs on it: the older
> ones that don't allow specifying a custom message string, and newer ones
> that do.  People should use the newer ones where possible.
> 
> That means not using the following when throwing nsresults/DOMExceptions:
> 
>   ErrorResult::Throw(nsresult)
>   ErrorResult::operator=(nsresult)
> 
> and instead using:
> 
>   ErrorResult::ThrowDOMException(nsresult, const nsACString&)
> 
> which allows passing a message string that explains why the exception is
> being thrown.  Web developers will thank you and not post tweets like
> https://twitter.com/sebmck/status/1155709250225573889
> 
> When throwing TypeError or RangeError, ThrowTypeError/ThrowRangeError
> already require a message string, though I am making some changes in
> https://phabricator.services.mozilla.com/D45932 to make it a bit simpler
> to pass in custom message strings there.
> 
> Thank you all for making web developers' lives better,
> Boris
> 
> P.S. We currently have a _lot_ of uses of the
> no-informative-error-message APIs.  Some of these might be things we
> don't really expect web pages to hit (e.g. exceptions when in the wrong
> type of global).  But generally speaking all ~560 uses of
> operator=(nsresult) are code smells, as are the ~1000 uses of
> Throw(NS_ERROR_DOM_*).


Re: Intent to Prototype: Have window.outerHeight/outerWidth lie and report the innerHeight/innerWidth

2019-09-08 Thread David Teller
Have you checked that we don't use it internally in Firefox to e.g.
position tooltip menus? If so, we may need workarounds for the UI and
possibly WebExtensions.

Cheers,
 David

On 08/09/2019 06:57, Tom Ritter wrote:
> Summary:
> window.outerHeight/outerWidth are legacy properties that report the
> size of the outer window of the browser. By subtracting against
> innerHeight/innerWidth it exposes the size of the user's browser
> chrome which can be unique depending on customization, but at the
> least reveals non-standardized information that can be used for
> fingerprinting purposes.
> 
> I have a hard time figuring out how a website would use it for
> (legitimate|reasonable) rendering purposes. I discussed it with Anne
> and we'd like to neuter it and see if we can remove this
> fingerprintable information if possible.
> 
> Tor Browser (and RFP mode) has reported the values of
> innerHeight/innerWidth for outerHeight/outerWidth for a long time and
> I haven't seen or heard of any breakage caused as a result of that.
> 
> (We'll also need to spoof window.screenX and window.screenY as
> window.mozInnerScreenX and window.mozInnerScreenY respectively.)
> 
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1579584
> Standard: https://www.w3.org/TR/cssom-view-1/#dom-window-outerwidth
> Platform coverage: All, although TBH I don't know how this behaves on 
> Android...
> 
> Preference: Yes, this will be controlled by a preference that I'll
> flip for Nightly for now and watch for reports of breakage.
> 
> DevTools bug: n/a
> Other browsers: I haven't proposed this to any other browsers.
> web-platform-tests: I don't believe any WPT actually test for the
> correct value here.
> Secure contexts: This will be applicable everywhere
> 
> I considered adding telemetry for the properties; but reading them
> doesn't imply websites are relying on them for anything.
> 
> -tom


Re: Coding style: Naming parameters in lambda expressions

2019-09-06 Thread David Teller
I'm sure that Searchfox could have useful highlights.

However, as you guessed, this was something that happened within an
editor + debugger, so there's only so much we can do in this direction.

Cheers,
 David

On 06/09/2019 15:40, Andrew Sutherland wrote:
> On 9/6/19 7:31 AM, David Teller wrote:
>> For what it's worth, I recently spent half a day attempting to solve a
>> bug which would have been trivial if `a` and `m` prefixes had been
>> present in that part of the code.
>>
>> While I find these notations ugly, they're also useful.
> 
> 
> Is this something searchfox could have helped with by annotating the
> symbol names via background-color, iconic badge, or other means?  Simon
> and I have been discussing an optional emacs glasses-mode style of
> operation which so far would allow for:
> 
> - expansion of "auto" to the actual underlying inferred type. "auto"
> would still be shown, and the expanded type would be shown in a way that
> indicates it's synthetic like being placed in parentheses and rendered
> in italics.
> 
> - inlining of constants.
> 
> 
> Searchfox does already highlight all instance of a symbol when it's
> hovered over, or optionally made sticky from the menu (thanks, :kats!),
> but more could certainly be done here.  The question is frequently how
> to provide the extra information without making the interface too busy.
> 
> But of course, if this was all being done from inside an editor or a
> debugger, no matter what tricks searchfox can do, they can't help you
> elsewhere.
> 
> 
> Andrew
> 


Re: Coding style: Naming parameters in lambda expressions

2019-09-06 Thread David Teller
For what it's worth, I recently spent half a day attempting to solve a
bug which would have been trivial if `a` and `m` prefixes had been
present in that part of the code.

While I find these notations ugly, they're also useful.

Cheers,
 David

On 06/09/2019 12:57, Honza Bambas wrote:
> On 2019-09-05 23:14, Emilio Cobos Álvarez wrote:
>> Yeah, let's not add a new prefix please.
>>
>> I don't like aFoo either, though it's everywhere so consistency is
>> better than nothing :/.
>>
>> That being said, it shouldn't be hard to write some clang plugin or
>> such that automatically renames function arguments to stop using aFoo,
>> should we want to do that... Just throwing it in the air, and
>> volunteering if we agreed to do that ;)
> 
> I personally find it useful (the 'a' prefix) same as the 'm' prefix. 
> When I trace back where from an argument is coming, when it bubbles down
> few functions, it's good to see when it changes from 'aArg' to 'arg' ->
> ah, here we set the value!
> 
> -hb-
> 


Re: PSA: Improvements to infrastructure underpinning `firefox-source-docs`

2019-08-27 Thread David Teller
That sounds useful :)

Do we have any documentation on how to add documentation?


Re: non-const reference parameters in new and older code

2019-07-22 Thread David Teller
I believe in least surprise for the caller of an API. This seems to
match the Google style, as you describe it: any parameter that may be
mutated in any manner should be passed as a pointer rather than a
reference.

Cheers,
 David

On 22/07/2019 08:43, Karl Tomlinson wrote:
> https://google.github.io/styleguide/cppguide.html#Reference_Arguments
> has a simple rule to determine when reference parameters are
> permitted:
> "Within function parameter lists all references must be const."
> This is consistent with Mozilla's previous coding style:
> "Use pointers, instead of references for function out parameters,
> even for primitive types." [1]
> However, across Gecko there are different interpretations of what
> "out" parameter means.
> 
> The Google style considers a parameter to be an out parameter if
> any of its state may be mutated by the callee.
> In some parts of Gecko, a parameter is considered an out parameter
> only if the callee might make wholesale changes to the state of
> parameter.  Well before the announcement to switch to Google style,
> this interpretation was discussed in 2017 [2], with subsequent
> discussion around which types were suitable as non-const reference
> parameters.
> 
> I'm asking how should existing surrounding code with some
> different conventions affect when is it acceptable to follow
> Google style for reference or pointer parameters?
> 


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-03 Thread David Teller
The Google style sounds pretty good to me.

On 04/07/2019 07:11, Gerald Squelart wrote:
> Recently I coded something with a not-very-important slow-changing 
> rarely-used positive number: `unsigned mGeneration;`
> My reviewer commented: "Please use a type with an explicit size, such as 
> uint32_t. (General mozilla style; you don't see a bare "unsigned" around 
> much)"
> 
> I had never heard of this (in 4+ years), so I did a bit of research:
> 
> - I found plenty of `unsigned`s around, more than `uint32_t`s.
> 
> - I can't see anything about that in our coding style guides [1, 2].
> 
> - Our latest coding style [1] points at Google's, which has a section about 
> Integer Types [3], and the basic gist is: Use plain `int` for "not-too-big" 
> numbers, int64_t for "big" numbers, intXX_t if you need a precise size; never 
> use any unsigned type unless you work with bitfields or need 2^N overflow (in 
> particular, don't use unsigned for always-positive numbers, use signed and 
> assertions instead).
> 
> So, questions:
> 1. Do we have a written style I missed somewhere?
> 2. Do we have an unwritten style? (In which case I think we should write it 
> down.)
> 3. What do we think of the Google style, especially the aversion to unsigned?
> 
> Cheers,
> Gerald
> 
> 
> [1] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> [2] https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code
> [3] https://google.github.io/styleguide/cppguide.html#Integer_Types
> 


Re: Running C++ early in shutdown without an observer

2019-06-10 Thread David Teller



On 10/06/2019 10:28, Henri Sivonen wrote:
>>> Observers are automatically cleaned up at XPCOM shutdown, so you
>>> generally don't need to worry too much about them. That said,
>>> nsIAsyncShutdown is really the way to go when possible. But it currently
>>> requires an unfortunate amount of boilerplate.
> 
> Thanks. (nsIAsyncShutdown indeed looks like it involves a lot of boilerplate.)

I'll be happy to review patches that scrap the boilerplate :)

Cheers,
 David


Re: Running C++ early in shutdown without an observer

2019-06-07 Thread David Teller
Even on Desktop, we needed to move some cleanup to startup, in case the
process was killed by the OS.

On 07/06/2019 20:40, Chris Peterson wrote:
> On 6/7/2019 9:36 AM, Kris Maglione wrote:
>> On Fri, Jun 07, 2019 at 09:18:38AM +0300, Henri Sivonen wrote:
>>> For late shutdown cleanup, we have nsLayoutStatics::Shutdown(). Do we
>>> have a similar method for running things as soon as we've decided that
>>> the application is going to shut down?
>>>
>>> (I know there are observer topics, but I'm trying to avoid having to
>>> create an observer object and to make sure that _it_ gets cleaned up
>>> properly.)
>>
>> Observers are automatically cleaned up at XPCOM shutdown, so you
>> generally don't need to worry too much about them. That said,
>> nsIAsyncShutdown is really the way to go when possible. But it
>> currently requires an unfortunate amount of boilerplate.
> 
> Note that on Android, you may never get an opportunity for a clean
> shutdown because the OS can kill your app at any time.
> 
> I don't know what is the recommendation for shutdown activities on
> Android. The GeckoView team has had some recent bugs caused by shutdown
> tasks not running (e.g. committing cached font files or deleting temp
> files). I think these tasks were moved to startup or scheduled to run
> periodically.


Re: Running C++ early in shutdown without an observer

2019-06-07 Thread David Teller
Have you looked at nsIAsyncShutdown?

On 07/06/2019 08:18, Henri Sivonen wrote:
> For late shutdown cleanup, we have nsLayoutStatics::Shutdown(). Do we
> have a similar method for running things as soon as we've decided that
> the application is going to shut down?
> 
> (I know there are observer topics, but I'm trying to avoid having to
> create an observer object and to make sure that _it_ gets cleaned up
> properly.)
> 


Re: C++ method definition comments

2019-01-26 Thread David Teller
I find them extremely useful, too (as in "removing them would make my
life miserable in quite a few bugs"). I have no problem with putting
them on a separate line.

Cheers,
 David

On 26/01/2019 15:19, Jonathan Watt wrote:
> Personally I find them useful. Putting them on a separate line seems
> reasonable to me.
> 
> Jonathan
> 


How do we land `./mach vendor rust` patches, these days?

2019-01-18 Thread David Teller
Hi everybody,

 My last two attempts to update our crates with `./mach vendor rust`
failed, not during vendoring, but when I attempted to upload the patch.
Both times, moz-phab/arcanist or phabricator simply choked during the
call and I gave up after waiting 24h for the patch to be uploaded.

Do we have a better way to do this? Or should I use splinter for such
patches?

Cheers,
 Yoric


Re: Signals in Firefox

2018-11-21 Thread David Teller
Thanks for the suggestions.

Given that they are on an academic deadline and they have already
implemented the feature using straight inotify and a monitor thread, I'd
favor a lesser refactoring with just removing the signals.

Cheers,
 David

On 21/11/2018 22:06, Mike Hommey wrote:
> On Wed, Nov 21, 2018 at 10:22:38AM -0500, Nathan Froyd wrote:
>> On Wed, Nov 21, 2018 at 4:45 AM David Teller  wrote:
>>> What is our policy on using Unix signals on Firefox? I am currently
>>> reviewing a patch by external contributors that involves inotify's
>>> signal API, and I assume it's a bad idea, but I'd like to ask around
>>> first before sending them back to the drawing board.
>>
>> I don't think we have a policy, per se; certainly we already have uses
>> of signals in the JS engine's wasm implementation and the Gecko
>> profiler.  But in those cases, signals are basically the only way to
>> do what we want.  If there were alternative ways to accomplish those
>> tasks besides signals, I think we would have avoided signals.
>>
>> inotify looks like it has a file descriptor-based interface which
>> seems perfectly usable.  Not being familiar with inotify beyond
>> reading http://man7.org/linux/man-pages/man7/inotify.7.html, is there
>> a reason to prefer the signal interface versus the file descriptor
>> interface?  We use the standard gio/gtk event loop, so hooking up the
>> returned file descriptor from inotify_init should not be onerous.
>> widget/gtk/nsAppShell.cpp even contains some code to crib from:
>>
>> https://searchfox.org/mozilla-central/source/widget/gtk/nsAppShell.cpp#275-281
> 
> I'd go one step further. We use Gio from libglib, is there a reason not
> to use the GFileMonitor API, which wraps inotify?
> 
> https://developer.gnome.org/gio/stable/GFileMonitor.html
> 
> Mike
> 


Signals in Firefox

2018-11-21 Thread David Teller
Dear platformers,

What is our policy on using Unix signals on Firefox? I am currently
reviewing a patch by external contributors that involves inotify's
signal API, and I assume it's a bad idea, but I'd like to ask around
first before sending them back to the drawing board.

Cheers,
 Yoric


Re: Windows launcher process enabled by default on Nightly

2018-09-27 Thread David Teller
   Hi Aaron,

 It sounds cool, but I'm trying to understand what it means :) Do I
understand correctly that the main benefit is security?

Cheers,
 David

On 27/09/2018 17:19, Aaron Klotz wrote:
> Hi everybody,
> 
> Yesterday evening bug 1488554 [1] merged to mozilla-central, thus
> enabling the launcher process by default on Windows Nightly builds. This
> change is at the build config level.
> 


License of test data?

2018-04-24 Thread David Teller
Ideally, I'd like to put a few well-known frameworks in jsapi tests, to
be used as data for SpiderMonkey integration tests.

What's our policy for this? Are there any restrictions? All the
frameworks I currently have at hand have either an MIT or an MIT-like
license, so in theory we need to copy the license somewhere in
the test repo, right?

Cheers,
 David


Re: Intent to implement: Early, experimental support for application/javascript+binast

2018-04-18 Thread David Teller
No plans yet, but it's a good idea. The only reason not to do this (that
I can think of) is that we might prefer switching to the Bytecode Cache,
which would probably give us even better speed ups.

I understand that we can't use the Bytecode Cache for our chrome code
yet due to the fact that it uses a very different path in Necko, which
is the Source of Truth for the Bytecode Cache, but I may be wrong, and
it might be fixable.

Cheers,
 David

On 18/04/2018 19:09, Dave Townsend wrote:
> This is awesome. I understand that we already do some kind of
> pre-compile for our chrome code, is there any plan/benefit to switch to
> this eventually there?

> 


Intent to implement: Early, experimental support for application/javascript+binast

2018-04-18 Thread David Teller
# Summary

JavaScript parsing and compilation are performance bottlenecks. The
JavaScript Binary AST is a domain-specific content encoding for
JavaScript, designed to speed up parsing and compilation of JavaScript,
as well as to allow streaming compilation of JavaScript (and possibly
streaming startup interpretation).

We already get a 30-50% parsing improvement by just switching to this
format, without any streaming code optimization, and we believe that we
can go much further. We wish to implement
`application/javascript+binast` so as to start experiments with partners.


# Bug

Bug 1451344


# Link to standard

This content encoding is a JS VM technology, with an entry point for
loading in the DOM.

- DOM level: No proposal yet.
https://github.com/binast/ecmascript-binary-ast/issues/27
- JS level (high): https://binast.github.io/ecmascript-binary-ast/
- JS level (low): No proposal yet.
https://binast.github.io/binjs-ref/binjs_io/multipart/index.html#overview


# Platform coverage

All.


# Estimated or target release

For the moment, no target release. We are still in the experimentation
phase.


# Preference behind which this will be implemented

dom.script.enable.application_javascript_binast


# Is this feature enabled by default in sandboxed iframes?

This is just a compression format, should make no change wrt security.

# DevTools bug

Bug 1454990


# Do other browser engines implement this?

Not yet. We are still in the experimentation phase.


# web-platform-tests

No web platform specification yet.


# Secure contexts

Let's restrict this to secure contexts.


Re: Prefs overhaul

2018-03-12 Thread David Teller
Out of curiosity, why is the read handled by C++ code?

On 12/03/2018 10:38, Nicholas Nethercote wrote:
> I don't know. But libpref's file-reading is done by C++ code, which passes
> a string to the Rust code for parsing.
> 
> Nick


Re: Who can review licenses these days?

2018-03-10 Thread David Teller


On 09/03/2018 19:39, Gregory Szorc wrote:
> On Fri, Mar 9, 2018 at 7:28 AM, David Teller wrote:
> 
> I'll need a license review for a vendored Rust package. Who can perform
> these reviews these days?
> 
> 
> We have an allow list of licenses in our Cargo config. So if the license
> is already allowed, you can reference the crate in a Cargo.toml and
> `mach vendor rust` will "just work." Otherwise, we need to review the
> license before any action is taken.
> 
> **Only attorneys or someone empowered by them should review licenses and
> give sign-off on a license.**

Yes, that's exactly what I'm talking about. We have whitelisted BSD 3,
but I have a build-time dependency on vendored BSD 2 code.

> 
> You should file a Legal bug at
> https://bugzilla.mozilla.org/enter_bug.cgi?product=Legal. I think the
> component you want is "Product - feature." Feel free to CC me and/or
> Ted, as we've dealt with these kinds of requests in the past and can
> help bridge the gap between engineering and legal. We also know about
> how to follow up with the response (e.g. if we ship code using a new
> license, we need to add something to about:license).

Ok, thanks.

Cheers,
 David


Who can review licenses these days?

2018-03-09 Thread David Teller
I'll need a license review for a vendored Rust package. Who can perform
these reviews these days?

Thanks,
 Yoric


Re: New prefs parser has landed

2018-02-02 Thread David Teller
Pretty complicated in the general case but might be simple in the case
of number overflow.

Also, while we shouldn't depend on the UI in libpref, could we send some
kind of event or observer notification that the UI could use to display
a detailed error message? It would be a shame if Firefox was broken and
impossible-to-diagnose because of a number overflow, for instance.

On 02/02/2018 14:42, Boris Zbarsky wrote:
> You mean pick up parsing again after hitting an error? That sounds
> complicated...
> 
> -Boris


Re: Refactoring proposal for the observer service

2018-01-03 Thread David Teller
That would be great!

On 03/01/18 23:09, Gabriele Svelto wrote:
> TL;DR this is a proposal to refactor the observer service to use a
> machine-generated list of integers for the topics (disguised as enums/JS
> constants) instead of arbitrary strings.
> 


Re: Website memory leaks

2017-11-06 Thread David Teller
As a user, I would definitely love to have this.

I wanted to add something like that to about:performance, but at the
time, my impression was that we did not have sufficient platform data on
where allocations come from to provide something convincing.

Cheers,
 David

On 02/11/17 15:34, Randell Jesup wrote:
> [Note: I'm a tab-hoarder - but that doesn't really cause this problem]
> 
> tl;dr: we should look at something (roughly) like the existing "page is
> making your browser slow" dialog for website leaks.
> 
> 


Re: We need better canaries for JS code

2017-10-19 Thread David Teller
Btw, I believe that there is already support for reporting uncaught
errors and that it is blocked by the lack of test harness support.

Cheers,
 David

On 18/10/17 19:37, Steve Fink wrote:
> My gut feeling is that you'd only want uncaught errors, and
> AutoJSAPI::ReportException is a better place than setPendingException. I
> don't know how common things like
> 
>   if (eval('nightlyOnlyFeature()')) { ... }
> 
> are, but they certainly seem reasonable. And you'd have to do a bunch of
> work for every one to decide whether the catch was appropriate or not.
> It may be worth doing too, if you could come up with some robust
> whitelisting mechanisms, but at least starting with uncaught exceptions
> seems more fruitful.
> 
> As for the Promise case, I don't know enough to suggest anything, but
> surely there's a way to detect those particular issues separately? Is
> there any way to detect if a finalized Promise swallowed an exception
> without "handling" it in some way or other, even if very heuristically?
> 
> 


Re: We need better canaries for JS code

2017-10-18 Thread David Teller
This should be feasible.

Opening bug 1409852 for the low-level support.

On 18/10/17 22:22, Dan Mosedale wrote:
> Could we do this on a per-module opt-in basis to allow for gradual
> migration?  That is to say, assuming there's enough information in the
> stack to tell where it was thrown from (I'm guessing that's the case
> most of the time), by default, ignore these errors unless they're from
> code in an opted-in module?
> 
> Dan
> 


Re: We need better canaries for JS code

2017-10-18 Thread David Teller


On 18/10/17 14:16, Boris Zbarsky wrote:
> On 10/18/17 4:28 AM, David Teller wrote:
>> 2/ basically impossible to diagnose in the wild, because there was no
>> error message of any kind.
> 
> That's odd.  Was the exception caught or something?  If not, it should
> have shown up in the browser console, at least

In this case, the browser console couldn't be opened. Also, as suggested
by gps, we can probably reuse the same (kind of) mechanism to report
stacks of programming errors in the wild.

> 
>> I have one proposal. Could we change the behavior of the JS VM as
>> follows?
> 
> Fwiw, the JS VM doesn't really do exception handling anymore; we handle
> all that in dom/xpconnect code.

Mmmh... I was looking at setPendingException at
http://searchfox.org/mozilla-central/source/js/src/jscntxtinlines.h#435
. Can you think of a better place to handle this?

>> - The changes affect only Nightly.
>> - The changes affect only mozilla chrome code (including system add-ons
>> but not user add-ons or test code).
> 
> What about test chrome code?  The browser and chrome mochitests are
> pretty hard to tell apart from "normal" chrome code...

Good question. I'm not sure yet. I actually don't know how the tests are
loaded, but I hope that there is a way. Also, we need to test: it is
possible that the code of tests might not be a (big) problem.

>> - Any programmer error (e.g. SyntaxError) causes a crash that
>> displays (and attaches to the CrashReporter) both the JS stack and
>> the native stack.
> 
> We would have to be a little careful to only include the chrome frames
> in the JS stack.
> 
> But the more important question is this: should this only apply to
> _uncaught_ errors, or also to ones inside try/catch?  Doing the former
> is pretty straightforward, actually.  Just hook into
> AutoJSAPI::ReportException and have it do whatever work you want.  It
> already has a concept of "chrome" (though it may not match the
> definition above; it basically goes by "system principal or not?") and
> should be the bottleneck for all uncaught exceptions, except:
> 
> * Toplevel evaluation errors (including syntax errors) in worker scripts.
> * "uncaught" promise rejections of various sorts
> 
> Doing this might be a good idea.  It's _definitely_ a good experiment...

My idea would be to do it even on caught errors. It is too easy to catch
errors accidentally, in particular with Promises.
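A contrived example of the accidental-catch footgun (hypothetical code, not the actual bug in question):

```javascript
// A typo (here a TypeError, the classic "undefined is not a function")
// inside a promise chain is silently swallowed by a catch handler that
// was only meant for expected failures.
const utils = {}; // utils.doSomething was never defined

Promise.resolve()
  .then(() => {
    utils.doSomething(); // TypeError: utils.doSomething is not a function
  })
  .catch(() => {
    // Intended to handle e.g. network errors; the programmer error
    // disappears here without any message.
  });
```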

Cheers,
 David


Re: We need better canaries for JS code

2017-10-18 Thread David Teller
> I'm not sure changing behavior of the JS VM is the proper layer to
> accomplish this. I think reporting messages from the JS console is a
> better place to start. We could change the test harnesses to fail tests
> if certain errors (like SyntaxError or TypeError) are raised. If there
> is a "hook" in the JS VM to catch said errors at error time, we could
> have the test harnesses run Firefox in a mode that makes said errors
> more fatal (even potentially crashing as you suggest) and/or included
> additional metadata, such as stacks.

Ok, I discussed this with jorendorff.

It shouldn't be too hard to add this hook, plus it should have basically
no overhead. The next step would be to register a test harness handler
to crash (or do something else).

This would later open the door to reporting errors (possibly through
crashing) from Nightly, Beta, Release, ...

My main worry, at this stage, is what we encountered when we started
flagging uncaught async errors: some module owners simply never fixed
their errors, so we had to whitelist large swaths of Firefox code,
knowing that it was misbehaving.


Cheers,
 David


Re: We need better canaries for JS code

2017-10-18 Thread David Teller


On 18/10/17 10:45, Gregory Szorc wrote:
> I agree that errors like this should have better visibility in order to
> help catch bugs.
> 
> I'm not sure changing behavior of the JS VM is the proper layer to
> accomplish this. I think reporting messages from the JS console is a
> better place to start. We could change the test harnesses to fail tests
> if certain errors (like SyntaxError or TypeError) are raised. If there
> is a "hook" in the JS VM to catch said errors at error time, we could
> have the test harnesses run Firefox in a mode that makes said errors
> more fatal (even potentially crashing as you suggest) and/or included
> additional metadata, such as stacks.

Works for me. I'd need to check how much performance this would cost.

> Another idea would be to require all non-log output in the JS console to
> be accounted for. Kinda like reftest's expected assertion count.
> Assuming most JS errors/warnings are unwanted, this would allow us to
> fail all tests reporting JS errors/warnings while allowing wanted/known
> failures to not fail the test. A problem though is background services
> "randomly" injecting their output during test execution depending on
> non-deterministic timing. It could be difficult to roll this out in
> practice. But it feels like we should be able to filter out messages or
> stacks accordingly.

This looks like a much larger undertaking.

> But why stop there? Shouldn't Firefox installs in the wild report JS
> errors and warnings in Firefox code back to Mozilla (like we do
> crashes)? I know this has been discussed. I'm not sure what we're
> doing/planning about it though.

I would be for it, but as a followup.


We need better canaries for JS code

2017-10-18 Thread David Teller
Hi everyone,

  Yesterday, Nightly was broken on Linux and MacOS because of a typo in
JS code [1]. If I understand correctly, this triggered the usual
"undefined is not a function", which was

1/ uncaught during testing, as these things often are;
2/ basically impossible to diagnose in the wild, because there was no
error message of any kind.

I remember that we had bugs of this kind lurking for years in our
codebase, in code that was triggered daily and that everybody believed
to be tested.

I'd like to think that there is a better way to handle these bugs,
without waiting for them to explode in our user's face. Opening this
thread to see if we can find a way to somehow "solve" these bugs, either
by making them impossible, or by making them much easier to solve.

I have one proposal. Could we change the behavior of the JS VM as follows?

- The changes affect only Nightly.
- The changes affect only mozilla chrome code (including system add-ons
but not user add-ons or test code).
- Any programmer error (e.g. SyntaxError) causes a crash that displays
(and attaches to the CrashReporter) both the JS stack and the native
stack.
- Any SyntaxError is a programmer error.
- Any TypeError is a programmer error.
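In pseudo-policy form, the proposal amounts to something like the following (a hypothetical sketch; the real hook would live in the VM/XPConnect, and every name here is invented for illustration):

```javascript
// Decide whether an error raised in chrome code counts as a programmer error.
function isProgrammerError(error) {
  // Per the proposal: any SyntaxError or TypeError is a programmer error.
  return error instanceof SyntaxError || error instanceof TypeError;
}

// Hypothetical hook: called for errors in mozilla chrome code.
function onChromeError(error, { isNightly, isMozillaChromeCode }, crash) {
  if (isNightly && isMozillaChromeCode && isProgrammerError(error)) {
    // A real implementation would attach both the JS stack and the
    // native stack to the crash report.
    crash(error.stack);
    return true; // handled fatally
  }
  return false; // fall back to normal error reporting
}
```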

I expect that this will find a number of lurking errors, so we may want
to migrate code progressively, using a directive, say "use strict
moz-platform" and static analysis to help encourage using this directive.

What do you think?

Cheers,
 David



[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1407351#c28


JavaScript Binary AST Engineering Newsletter #1

2017-08-18 Thread David Teller
Hey, all cool kids have exciting Engineering Newsletters these days, so
it's high time the JavaScript Binary AST got one!


# General idea

JavaScript Binary AST is a joint project between Mozilla and Facebook to
rethink how JavaScript source code is stored/transmitted/parsed. We
expect that this project will help visibly speed up the loading of large
codebases of JS applications, including web applications, and will have
a large impact on the JS development community, including web
developers, Node developers, add-on developers, and ourselves.


# Context

The size of JavaScript-based applications – starting with webpages –
keeps increasing, while the parsing speed of JavaScript VMs has basically
peaked. The result is that the startup of many web/js applications is
now limited by JavaScript parsing speed. While there are measures that
JS developers can take to improve the loading speed of their code, many
applications have reached a situation in which such an effort becomes
unmanageable.

The JavaScript Binary AST is a novel format for storing JavaScript code.
The global objective is to decrease the time spent between
first-byte-received and code-execution-starts. To achieve this, we focus
on a new file format, which we hope will aid our goal by:

- making parsing easier/faster
- supporting parallel parsing
- supporting lazy parsing
- supporting on-the-fly bytecode generation
- decreasing file size.

For more context on the JavaScript Binary AST and alternative
approaches, see the companion high-level blog post [1].


# Progress

## Benchmarking Prototype

The first phase of the project was spent developing an early prototype
format and parser to validate our hypothesis that:

- we can make parsing much faster
- we can make lazy parsing much faster
- we can reduce the size of files.

The prototype built for benchmarking was, by definition, incomplete, but
sufficient to represent ES5-level source code. All our benchmarking was
performed on snapshots of Facebook’s chat and of the JS source code of
our own DevTools.

While numbers are bound to change as we progress from a proof-of-concept
prototype towards a robust and future-proof implementation, the results
we obtained from the prototype are very encouraging.

- parsing time *0.3 (i.e. parsing time is less than a third of the
original time)
- lazy parsing time *0.1
- gzipped file size vs gzipped human-readable source code *0.3
- gzipped file size vs gzipped minified source code *0.95.

Please keep in mind that future versions may have very different
results. However, these values confirm that the approach can
considerably improve performance.

More details in bug 1349917.


## Reference Prototype

The second phase of the project consisted of building a second prototype
format designed to:

- support future evolutions of JavaScript
- support annotations designed to allow safe lazy/concurrent parsing
- serve as a reference implementation for third-party developers

This reference prototype has been implemented (minus compression) and is
currently being tested. It is entirely independent from SpiderMonkey and
uses Rust (for all the heavy data structure manipulation) and Node (to
benefit from the existing parsing/pretty-printing tool Babylon). It is
likely that, as data structures stabilize, the reference prototype will
migrate to a full JS implementation, so as to make it easier for
third-party contributors to join in.

More details on the tracker [2].


## Standard tracks

Like any proposed addition to the JavaScript language, the JavaScript
Binary AST needs to go through a standards body.

The project has successfully been accepted as an ECMA TC39 Stage 1
Proposal. Once we have a working Firefox implementation and compelling
results, we will proceed towards Stage 2.

More details on the tracker [3].



# Next few steps

There is still lots of work before we can land this on the web.


## SpiderMonkey implementation

We are currently working on a SpiderMonkey implementation of the
Reference Prototype, initially without lazy or concurrent parsing. We
need to finish it to be able to properly test that JavaScript decoding
works.

## Compression

The benchmarking prototype only implemented naive compression, while the
reference prototype – which carries more data – doesn’t implement any
form of compression yet. We need to reintroduce compression to be able
to measure the impact on file size.



# How can I help?

If you wish to help with the project, please get in touch with either
Naveed Ihsanullah (IRC: naveed, mail: nihsanullah) or myself (IRC:
Yoric, mail: dteller).


[1] https://yoric.github.io/post/binary-ast-newsletter-1/
[2] https://github.com/Yoric/binjs-ref/
[3] https://github.com/syg/ecmascript-binary-ast/


Re: nodejs for extensions ?

2017-07-31 Thread David Teller
Node dependency trees tend to be pretty large, so I'm a little concerned
here. Has the memory footprint been measured?

Cheers,
 David

On 31/07/17 19:45, Michael Cooper wrote:
> If you mean using modules from NPM in a browser add-on, the Shield client
> extension recently started doing this <
> https://github.com/mozilla/normandy/tree/master/recipe-client-addon>
> 
> We do this by using webpack to process the node modules, bundling the
> entire dependency tree of a library into a single file. We then add a few
> more bits to make the resulting file compatible with `Chrome.utils.import`.
> You can see the webpack config file here <
> https://github.com/mozilla/normandy/blob/master/recipe-client-addon/webpack.config.js>
> and the way we use the resulting files here <
> https://github.com/mozilla/normandy/blob/48a446cab33d3b261b87c3d509964987e044289d/recipe-client-addon/lib/FilterExpressions.jsm#L12
>>
> 
> We suspect that this approach won't be compatible with all Node libraries,
> because it is fairly naive. But it has worked well for the ones we've used
> (React, ReactDOM, ajv, and mozjexl, so far).
> 


Re: Extensions and Gecko specific APIs

2017-07-26 Thread David Teller
Well, at least there is the matter of feature detection, for people who
want to write code that will work in more than just Firefox.
moz-prefixing makes it clear that the feature can be absent on some
browsers.

Cheers,
 David

On 26/07/17 05:55, Martin Thomson wrote:
> On Wed, Jul 26, 2017 at 6:20 AM, Andrew Overholt  wrote:
>> On Tue, Jul 25, 2017 at 3:06 PM, David Teller  wrote:
>>> Should we moz-prefix moz-specific extensions?
>>
>> We have been trying not to do that for Web-exposed APIs but maybe the
>> extensions case is different?
> 
> I don't think that it is.  If there is any risk at all that someone
> else might want to use it, then prefixing will only make our life
> harder long term.  Good names are cheap enough that we don't need to
> wall ours off.
> 
> See also https://tools.ietf.org/html/rfc6648
> 


Re: Extensions and Gecko specific APIs

2017-07-25 Thread David Teller
Should we moz-prefix moz-specific extensions?

On 25/07/17 20:45, Jet Villegas wrote:
> Based on product plans I've heard, this sounds like the right approach. We
> should try to limit the scope of such browser-specific APIs but it's likely
> necessary in some cases (e.g., in the devtools.)
> 
> 
> On Tue, Jul 25, 2017 at 2:44 AM, Gabor Krizsanits 
> wrote:
> 
>> In my mind at least the concept is to share the API across all browsers
>> where we can, but WebExtensions should not be limited to APIs that are
>> accepted and implemented by all browser vendors.
>>


Re: More Rust code

2017-07-23 Thread David Teller
Thanks for starting this conversation. I'd love to be able to use more
Rust in Firefox.

In SpiderMonkey, the main blocker I encounter is interaction with all
the nice utility classes we have in C++, in particular templatized ones.

Also, for the rest of Gecko, my main blocker was the lack of support for
Rust-implemented webidl in m-c, which meant that roughly 50% of the code
I would be writing would have been bug-prone adapters.

Cheers,
 David

On 10/07/17 12:29, Nicholas Nethercote wrote:
> Hi,
> 
> Firefox now has multiple Rust components, and it's on track to get a
> bunch more. See https://wiki.mozilla.org/Oxidation for details.
> 
> I think this is an excellent trend, and I've been thinking about how to
> accelerate it. Here's a provocative goal worth considering: "when
> writing a new compiled-code component, or majorly rewriting an existing
> one, Rust should be considered / preferred / mandated."
> 
> What are the obstacles? Here are some that I've heard.
> 
> - Lack of Rust expertise for both writing and reviewing code. We have
> some pockets of expertise, but these need to be expanded greatly. I've
> heard that there has been some Rust training in the Paris and Toronto
> offices. Would training in other offices (esp. MV and SF, given their
> size) be a good idea? What about remoties?
> 
> - ARM/Android is not yet a Tier-1 platform for Rust. See
> https://forge.rust-lang.org/platform-support.html and
> https://internals.rust-lang.org/t/arm-android-to-tier-1/5227 for some
> details.
> 
> - Interop with existing components can be difficult. IPDL codegen rust
> bindings could be a big help.
> 
> - Compile times are high, especially for optimized builds.
> 
> Anything else?
> 
> Nick
> 
> 
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
> 


Re: JSBC: JavaScript Start-up Bytecode Cache

2017-06-13 Thread David Teller


On 6/13/17 5:37 PM, Nicolas B. Pierron wrote:
> Also, the chrome files are stored in the jar file (If I recall
> correctly), and we might want to generate the bytecode ahead of time,
> such that users don't have to go through the encoding-phase.

How large is the bytecode?

I suspect that if it's too large, we'll be okay with generating the
bytecode on the user's computer.

Cheers,
 David


Re: Changing our thread APIs for Quantum DOM scheduling

2017-05-19 Thread David Teller
Out of curiosity, how will this interact with nsCOMPtr thread-safe (or
thread-unsafe) refcounting?

Also, in code I have seen, `NS_IsMainThread` is used mainly for
assertion checking. I *think* that the semantics you detail below will
work, but do you know if there is a way to make sure of that?

Also, I had the impression that Quantum DOM scheduling made JS event
loop spinning unnecessary. Did I miss something?

Cheers,
 David

On 5/19/17 1:38 AM, Bill McCloskey wrote:
> Hi everyone,
> 
> One of the challenges of the Quantum DOM project is that we will soon have
> multiple "main" threads [1]. These will be real OS threads, but only one of
> them will be allowed to run code at any given time. We will switch between
> them at well-defined points (currently just the JS interrupt callback).
> This cooperative scheduling will make it much easier to keep our global
> state consistent. However, having multiple "main" threads is likely to
> cause confusion.
> 
> To simplify things, we considered trying to make these multiple threads
> "look" like a single main thread at the API level, but it's really hard to
> hide something like that. So, instead, we're going to be transitioning to
> APIs that try to avoid exposing threads at all. This post will summarize
> that effort. You can find more details in this Google doc:
> 
> https://docs.google.com/document/d/1MZhF1zB5_dk12WRiq4bpmNZUJWmsIt9OTpFUWAlmMyY/edit?usp=sharing
> 
> [Note: I'd like this thread (and the Google doc) to be for discussing
> threading APIs. If you have more general questions about the project,
> please contact me personally.]
> 
> The main API change is that we're going to make it a lot harder to get hold
> of an nsIThread for the main thread. Instead, we want people to work with
> event targets (nsIEventTarget). The advantage of event targets is that all
> the main threads will share a single event loop, and therefore a single
> nsIEventTarget. So code that once expected a single main thread can now
> expect a single main event target.
> 
> The functions NS_GetMainThread, NS_GetCurrentThread, and
> do_Get{Main,Current}Thread will be deprecated. In their place, we'll
> provide mozilla::Get{Main,Current}ThreadEventTarget. These functions will
> return an event target instead of a thread.
> 
> More details:
> 
> NS_IsMainThread: This function will remain pretty much the same. It will
> return true on any one of the main threads and false elsewhere.
> 
> Dispatching runnables: NS_DispatchToMainThread will still work, and you
> will still be able to dispatch using Get{Main,Current}ThreadEventTarget.
> From JS, we want people to use Services.tm.dispatchToMainThread.
> 
> Thread-safety assertions: Code that used PR_GetCurrentThread for thread
> safety assertions will be converted to use NS_ASSERT_OWNINGTHREAD, which
> will allow code from different main threads to touch the same object.
> PR_GetCurrentThread will be deprecated. If you really want to get the
> current PRThread*, you can use GetCurrentPhysicalThread, which will return
> a different value for each main thread.
> 
> Code that uses NS_GetCurrentThread for thread safety assertions will be
> converted to use nsIEventTarget::IsOnCurrentThread. The main thread event
> target will return true from IsOnCurrentThread if you're on any of the main
> threads.
> 
> Nested event loop spinning: In the future, we want people to use
> SpinEventLoopUntil to spin a nested event loop. It will do the right thing
> when called on any of the main threads. We'll provide a similar facility to
> JS consumers.
> 


Re: Adding Rust code to Gecko, now documented

2017-01-26 Thread David Teller
Bug 1231711, but I never got to do it, unfortunately.

On 26/01/17 08:01, zbranie...@mozilla.com wrote:
> On Thursday, November 10, 2016 at 5:15:26 AM UTC-8, David Teller wrote:
>> Ok. My usecase is the reimplementation of OS.File in Rust, which should
>> be pretty straightforward and shave a few Mb of RAM and possibly a few
>> seconds during some startups. The only difficulty is the actual JS
>> binding. I believe that the only DOM object involved would be Promise,
>> I'll see how tricky it is to handle with a combo of Rust and C++.
> 
> Did you ever get to do this? Is there a bug?
> 
> zb.


Re: What are your use cases for the Touch Bar on the new MacBook Pro?

2017-01-03 Thread David Teller
To build upon the "tab bar" idea: scrolling quickly among my 300+ tabs.

On 03/01/17 21:50, sev...@gmail.com wrote:
> Off the top of my head ideas:
> 
> Quick-access to the back, forward, refresh, bookmark, share buttons could be 
> a good. Tab bar might be handy too, so with a touch of my finger I can go to 
> the tab I want quickly. The bar could change depending on your screen to. If 
> you’re on YouTube/Netflix it could be player controls, or if on NewTab a row 
> of highlights of some type?


Re: Adding Rust code to Gecko, now documented

2016-11-10 Thread David Teller
Ok. My usecase is the reimplementation of OS.File in Rust, which should
be pretty straightforward and shave a few Mb of RAM and possibly a few
seconds during some startups. The only difficulty is the actual JS
binding. I believe that the only DOM object involved would be Promise,
I'll see how tricky it is to handle with a combo of Rust and C++.

Thanks,
 David

On 10/11/16 02:43, Bobby Holley wrote:
> On Wed, Nov 9, 2016 at 12:31 PM, David Teller <dtel...@mozilla.com> wrote:
> 
> \o/
> 
> Do we already have a story for implementing WebIDL in Rust?
> 
> 
> In general, we decided that WebIDL objects need to remain C++, since
> they generally need to interact with the DOM and the extra complexity to
> support pure-Rust objects in Codegen isn't worth it. If there's
> functionality that makes sense to write in Rust, implement the core
> functionality in a Rust crate and just forward to it from the C++ DOM
> object.
> 


Re: Adding Rust code to Gecko, now documented

2016-11-09 Thread David Teller
\o/

Do we already have a story for implementing WebIDL in Rust?

Cheers,
 David

On 09/11/16 12:20, Ted Mielczarek wrote:
> I recently wrote some documentation on how to add Rust code to Gecko:
> http://gecko.readthedocs.io/en/latest/build/buildsystem/rust.html
> 
> It should be fairly straightforward for most use cases (thanks to Nathan
> Froyd for doing the lion's share of the work to make it so), but if
> there's anything that's unclear feel free to ask for clarification.
> 
> -Ted


Re: So, what's the point of Cu.import, these days?

2016-09-28 Thread David Teller
\o/, let's join forces :)

I admit that I haven't thought at all about the impact on exceptions. If
we migrate to ES6 modules, then the problem is something that we don't
need to handle. If we migrate to CommonJS modules with a loader built
into XPConnect, I think that we can solve this without blood or pain.
More on this later.

I'm not entirely scared of lazy modules. There are obvious difficulties,
but I don't think that they are insurmountable.

I'll answer separately for CommonJS and ES6 modules, because the
problems we'll be facing are clearly different.

* CommonJS

As you mention in c), I'm pretty sure that we can trivially port
`defineLazyModuleGetter` to CommonJS, without changes to the client
code. See [1] for a few more details.

This will not be sufficient to get static analysis to understand lazy
imports, but I imagine that we can fix this as follows:

1. Introduce a `lazyRequire` function with the same signature and scope
as `require` and with the semantics of `defineLazyModuleGetter`.

2. Either teach our open-source linters that `lazyRequire` is `require`
or use Babel to rewrite `lazyRequire` to `require` before static analysis.
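Step 1 above could look roughly like this (a hypothetical sketch; `lazyRequire` and its exact signature are inventions for illustration, mirroring the semantics of `defineLazyModuleGetter`):

```javascript
// Define `name` on `scope` as a lazy getter: the module is only loaded
// (via `require`) on first access, then the getter is replaced by a
// plain data property so later accesses are free.
function lazyRequire(scope, name, modulePath, require) {
  Object.defineProperty(scope, name, {
    configurable: true,
    enumerable: true,
    get() {
      const value = require(modulePath)[name];
      Object.defineProperty(scope, name, {
        value,
        writable: true,
        configurable: true,
        enumerable: true,
      });
      return value;
    },
  });
}
```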



* ES6 modules

Indeed, ES6 modules don't like lazy imports. Theoretically, we could
port `defineLazyModuleGetter` using `processNextEvent` footgun magic,
but I would really hate to go in this direction.

I have put together in [2] a possible plan to migrate to ES6 modules
without breaking `defineLazyModuleGetter` – pending confirmation from
@jonco that this can be done. Essentially, past some point in the
migration, `Cu.import` becomes a sync version of the ES7's `import()`
function.

If this works (and it's a pretty big "if"), we can use more or less the
same steps as above.


Cheers,
 David


[1] https://gist.github.com/Yoric/777effee02d6788d3abc639c82ff4488
[2] https://gist.github.com/Yoric/2a7c8395377c7187ebf02219980b6f4d



On 28/09/16 00:42, Kris Maglione wrote:
> On Sun, Sep 25, 2016 at 12:13:41AM +0200, David Teller wrote:
>> So, can anybody think of good reason to not do this?
> 
> One major problem I see with this is that we currently lazily import
> most modules the first time that a symbol they export is referenced. 
> If
> we move to CommonJS or ES6 modules, we need to either:
> 
> a) Load essentially *all* of our Chrome JS at startup, before we even
> draw the first window. Maybe the static dependency handling of ES6
> modules would make that more tractable, but I'd be surprised.
>
> 
> b) Manually import modules whenever we need them. That might be doable
> in CommonJS or with the proposed future dynamic imports of ES6, but with
> a lot of additional cognitive overhead.
> 
> c) Use CommonJS, but with a lazy import helper. I wouldn't mind that
> approach so much, but I think that it would pretty much nullify any
> advantage for static analysis.
> 
> or,
> 
> d) Some hybrid of the above.
> 
> Frankly, I've been considering transitioning the code I work with to
> CommonJS for a while, mainly because easier for outside contributors to
> cope with (especially if they're used to Node). Cu.import tends to hide
> the names of the symbols it exports (which shows in how often our ESLint
> hacks fail to guess at what it exports), and even defineLazyModuleGetter
> takes some getting used to.
> 
> The main things that have been stopping me are the lack of support for
> lazy imports, and the unfortunate impact that the SDK loader has on
> debugging, with its mangling of exceptions, and the source URL mangling
> imposed by the subscript loader. But those problems can be overcome.


Re: So, what's the point of Cu.import, these days?

2016-09-27 Thread David Teller
I have posted a draft of a plan for migrating from JSM to ES6 modules here:

https://gist.github.com/Yoric/2a7c8395377c7187ebf02219980b6f4d

Cheers,
 David


Re: So, what's the point of Cu.import, these days?

2016-09-27 Thread David Teller
On 27/09/16 19:35, Zibi Braniecki wrote:
> On Tuesday, September 27, 2016 at 2:28:54 AM UTC-7, David Teller wrote:
>> If I understand ES6 modules correctly, two imports from the same webpage
>> will return the same module instance, right?
> 
> I don't think this is a correct statement across globals.
> 
> When you load two modules in one js context, maybe, but when you have two 
> browser.xul windows open and you load a JSM, it's shared between them.
>
>> How hard would it be to consider all chrome code (of a JSRuntime) as a
>> single webpage? That's pretty much a requirement for any module loader
>> we would use for our chrome code.
> 
> So this opens up an interesting can of worms.
> As we move into multi-process world, would we be interested in making our 
> module loading code make it less impossible to chunk chrome into separate 
> processes?

That's too many negations for my poor brain :)

I think that we want to keep the current behavior of
one-chrome-module-has-only-one-instance-per-JSRuntime. Anything else
will introduce impossible-to-track bugs. From what I read below in your
message, I believe that we agree.

>> I *think* that we can get rid of all instances of the former, but I also
>> think that it's a multi-year project to do it all across our code.
> 
> I don't see how or why would we want to get rid of all instances of the 
> former.

"why": because you wrote "The former is more tricky." in your previous
message. If it's not, I'm quite happy to not remove them :)

For reference, "the former" is a snippet such as:

if (needed) {
  Cu.import(...);
}

to which I would add

function foo() {
  Cu.import(...);
}
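The lazy variant mentioned above (`defineLazyModuleGetter`) can be sketched in plain JavaScript, outside XPCOM, with a deferred property getter. `loadModule` here is a stand-in for the real (expensive) `Cu.import` call, and the surrounding names are invented for illustration:

```javascript
// Sketch of the lazy-import pattern behind XPCOMUtils.defineLazyModuleGetter,
// in plain JavaScript. `loadModule` stands in for the real Cu.import call;
// nothing here touches XPCOM.
function defineLazyGetter(target, name, loadModule) {
  Object.defineProperty(target, name, {
    configurable: true,
    get() {
      const value = loadModule(); // loaded on first access only
      // Replace the getter with a plain data property so later
      // accesses skip the loader entirely.
      Object.defineProperty(target, name, { value, writable: false });
      return value;
    },
  });
}

let loads = 0;
const scope = {};
defineLazyGetter(scope, "Services", () => {
  loads++;
  return { appinfo: { name: "demo" } };
});

console.log(loads);                       // 0 — nothing loaded yet
console.log(scope.Services.appinfo.name); // "demo" — loads on first touch
scope.Services;                           // second access hits the cache
console.log(loads);                       // 1
```

This is exactly the shape a conditional import takes once hoisted to module scope: the decision of *whether* to pay the load cost moves from an `if` statement to the first property access.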


> 
> It seems to me that we use the nature of singletons quite a lot - for 
> example, I'm working on a replacement for chrome registry for l10n resources 
> and I use runtime global cache by just having a single cache object in my JSM.
> 
>> @zb, do you think that it would be possible to have a migration path
>> from jsm towards ES6 modules that would let us do it one module at a
>> time? Let's assume for the moment that we can rewrite `Cu.import` to
>> somehow expose ES6 modules as jsm modules.
> 
> I don't see a reason why it wouldn't be possible. We could even start by just 
> promoting the new method for new code.

For this, we need first to make sure that two distinct .jsm modules/.xul
files/whatever chrome stuff that can load an ES6 module will receive the
same object when loading the same url, right? This seems like a pretty
important first step.

>> Also, how would you envision add-ons (which could add some kind of
>> modules dynamically) in a world of ES6 modules?
> 
> I linked to the ECMA proposals that give us nested/conditional imports.
> I believe that we should go this route.

Ok. I think that works, but we should check with the addons team.

>> It is my understanding that ES6 modules, being designed for the web,
>> don't expect any kind of sync I/O, and "just" block `onload`.
>> Transitioning to ES6 modules would undoubtedly require some hackery here
>> to pause the calling C++ code.
> 
> Quite the opposite.
> 
> The first version of es6 modules is synchronous, even static, from the 
> perspective of the user.
>
> Only the import() function proposal introduces async way to load modules.

I'm talking of the perspective of the embedder (here, Gecko). Reading
the implementation of ES6 modules, I have the impression that loading is
sync from the perspective of the JS & DOM but async from the perspective
of the embedder. Am I wrong?

If so, I fear that we're going to end up in gotcha cases whenever C++
calls JS code.

> I see a huge value in both so I'd be happy if we implemented both internally 
> and through this participated in the evolution of the import function 
> proposal.
>  
> 
> My personal preference would be to not settle on an intermediate module loading 
> API. If we want to move, let's go all the way and do this right.

Well, my personal preference is whatever doesn't require us to rewrite
the entire codebase of Firefox :)

> 
> One idea that came to mind for how could we differentiate between singleton 
> loading and in-context loading using statements would be to differentiate by 
> path.
> Not sure if it feels right or a dirty hack, but sth like this:
> 
> import { Registry } from 'resource://gre/modules/L10nRegistry.jsm'; 
> 
> would work like Cu.import,
> 
> import { Registry } from 'resource://gre/modules/my-file.js';
> 
> would load my-file in-context.
> 
> Does it sound like a cool way of solving it or a terrible way of complicating 
> it?

I think it complicates stuff. Among other things, we have code that's
designed to be loaded both as a jsm and as a CommonJS module, and I'm
pretty sure that this would wreak all sorts of havoc.

Cheers,
 David



Re: So, what's the point of Cu.import, these days?

2016-09-27 Thread David Teller
You are right, I wrote RequireJS but I was thinking CommonJS, much as is
used currently in DevTools and Jetpack.

According to their documentation, Facebook's Flow analysis already
supports CommonJS modules [1]. Of course, they prefer ES6 modules. It
just remains to be seen whether we can migrate to these.

Cheers,
 David

[1] https://flowtype.org/docs/modules.html#_


On 27/09/16 17:00, David Bruant wrote:
> Le mardi 27 septembre 2016 14:49:36 UTC+2, David Teller a écrit :
>> I have opened bug 1305669 with one possible strategy for migrating
>> towards RequireJS.
> 
> RequireJS [1] is a peculiar choice for chrome code especially if your goal is 
> static analysis.

[...]

> On the topic of transitioning, I don't maintain the Firefox codebase, so feel 
> free to ignore anything I say below.
> But for one-time top-level imports, the ES6 syntax seems like a better bet 
> given from what I've read that they're supported in chrome and are the 
> end-game.
> As far as dynamic/conditional imports, there doesn't seem to be much value to 
> move from Cu.import() to require() given it's unlikely static analysis tools 
> will do anything with either anyway (I'm interested in being proven wrong 
> here though) and the standard module loader [2] will beg for another rewrite 
> eventually.
> 
> hope that helps,
> 
> David
> 
> [1] http://requirejs.org/
> [2] https://whatwg.github.io/loader/ & 
> https://github.com/whatwg/loader/pull/152/files


Re: So, what's the point of Cu.import, these days?

2016-09-27 Thread David Teller


On 27/09/16 11:58, Gijs Kruitbosch wrote:
> On 27/09/2016 10:28, David Teller wrote:
>> How hard would it be to consider all chrome code (of a JSRuntime) as a
>> single webpage? That's pretty much a requirement for any module loader
>> we would use for our chrome code.
> 
> I don't see how you would do this, because the globals *would* be
> different for different windows (ie 2 copies of browser.xul windows),
> and for XPCOM components. Even if our module loader had magic that let
> this all happen without duplicating the modules themselves, it feels
> like all kinds of static analysis and tools that we'd be doing this for
> would break (because modules could never assume that |window| was a
> thing in their global, or that it was always the same, but the tools
> would assume that they could).

I don't follow.

Fwiw, I'm thinking of Facebook's Flow, which is designed for use with
Node.js (so, no `window`) and modules.

>> 3) The issue of loading the source code
>>
>> All module systems need to load their source before proceeding. If I
>> understand correctly, ES6 modules rely upon the same network stack as
>> the rest of Gecko to load the source code, while jsm rely only upon the
>> much more limited nsIJar* and nsILocalFile.
> 
> You've not really given enough detail here to explain why this is a
> problem. You can pass chrome and jar: URIs to an XHR (obviously you get
> security exceptions if you try this from the web...), and to
> NetUtil.newChannel, etc. etc. - it's not clear to me why it'd be a
> problem to use those to load the source code.

I'm talking about ES6 modules, which (if I read their code correctly)
use a built-in loading mechanism, already implemented in Gecko. Are you
talking of the same thing?

Of course, we could decide to write code using ES6 modules and compile
it away at build time. Is this what you had in mind?

>> Barring any mistake, some of our network stack is written in JS. @zb, do
>> you see any way to untangle this?
> 
> This would only be a problem if you needed the JS-y bits of the network
> stack to load those JS modules or components, which I don't think is the
> case - that would surely also cause problems if it was the case with
> Cu.import. Maybe I'm misunderstanding what problem you're trying to
> identify?

Well, Cu.import doesn't have this problem because it doesn't rely on any
JS code – the only I/O, in particular, is performed through nsIJarURL
and nsILocalFile, both of which are implemented in C++.

But yeah, I may be wrong. If Necko's C++ code can handle gracefully (and
without failing) the fact that some of Necko's JS code is not loaded
yet, this may not be a problem. I'm not familiar enough with that part
of the code.


Cheers,
 David


Re: So, what's the point of Cu.import, these days?

2016-09-27 Thread David Teller
I have opened bug 1305669 with one possible strategy for migrating
towards RequireJS.

Cheers,
 David

On 25/09/16 01:16, Bobby Holley wrote:
> If the conversion is tractable and we end up with module ergonomics that
> frontend developers are happy with, I'm certainly in favor of this from
> the platform side. It would get us the 15-20MB of memory savings that
> bug 1186409 was pursuing without the smoke and mirrors.
> 
> bholley
> 


Re: So, what's the point of Cu.import, these days?

2016-09-27 Thread David Teller


On 26/09/16 19:50, zbranie...@mozilla.com wrote:
> So, it seems to me that we're talking about two aspects of module loading:
> 
> 
> 1) Singleton vs. per-instance
> 
> Cu.import allows us to share a single object between all the code that 
> references it.
> 
> ES6 modules are not meant to do that.

If I understand ES6 modules correctly, two imports from the same webpage
will return the same module instance, right?

How hard would it be to consider all chrome code (of a JSRuntime) as a
single webpage? That's pretty much a requirement for any module loader
we would use for our chrome code.

> 2) Conditional vs. static
> 
> Cu.import allows us to decide *when* we're loading the code for side-effects, 
> or even *if* we're going to load it at all.
> 
> if (needed) {
>   Cu.import(...);
> }
> 
> or
> 
> XPCOMUtils.defineLazyModuleGetter(this, 'Services',
>   'resource://gre/modules/Services.jsm');
> 
> -
> 
> The latter one may be resolved by some future ECMA proposals like:
>  - https://github.com/domenic/proposal-import-function
>  - https://github.com/benjamn/reify/blob/master/PROPOSAL.md
> 
> The former is more tricky. I'm not sure how we can, within the statement-import 
> world, annotate the difference.
> In the import-function world we could maybe do:
> 
> import('resource://gre/modules/Services.jsm', {singleton: true}).then();
> 
> but for static I don't see a semantically compatible way to annotate 
> singleton reference.

I *think* that we can get rid of all instances of the former, but I also
think that it's a multi-year project to do it all across our code.

@zb, do you think that it would be possible to have a migration path
from jsm towards ES6 modules that would let us do it one module at a
time? Let's assume for the moment that we can rewrite `Cu.import` to
somehow expose ES6 modules as jsm modules.
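A migration shim along those lines could preserve Cu.import's singleton contract by caching module instances per URL. The following is a hypothetical sketch in plain JavaScript — `loadES6Module` stands in for whatever engine hook actually evaluates an ES6 module, and nothing here is real mozilla-central code:

```javascript
// Hypothetical sketch of a Cu.import-compatible shim backed by ES6 modules.
// `loadES6Module` stands in for the engine hook that evaluates a module;
// the cache preserves Cu.import's one-instance-per-URL (singleton) contract.
const moduleCache = new Map();

function loadES6Module(url) {
  // Placeholder: pretend every module exports a Registry object.
  return { Registry: { url, entries: [] } };
}

function importShim(url, targetScope = null) {
  let exports = moduleCache.get(url);
  if (!exports) {
    exports = loadES6Module(url);
    moduleCache.set(url, exports);
  }
  // Like Cu.import, optionally copy the exported names into a scope.
  if (targetScope) {
    Object.assign(targetScope, exports);
  }
  return exports;
}

// Two consumers loading the same URL see the same instance.
const a = importShim("resource://gre/modules/L10nRegistry.jsm");
const b = importShim("resource://gre/modules/L10nRegistry.jsm");
console.log(a.Registry === b.Registry); // true
```

The point of the sketch is the cache key: as long as it is per-JSRuntime rather than per-global, converted modules keep behaving like JSMs while their consumers migrate one at a time.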

Also, how would you envision add-ons (which could add some kind of
modules dynamically) in a world of ES6 modules?


3) The issue of loading the source code

All module systems need to load their source before proceeding. If I
understand correctly, ES6 modules rely upon the same network stack as
the rest of Gecko to load the source code, while jsm rely only upon the
much more limited nsIJar* and nsILocalFile.

Barring any mistake, some of our network stack is written in JS. @zb, do
you see any way to untangle this?


4) The issue of pausing C++

There is still the issue of C++ code calling JS code and expecting it to
return only once it has entirely loaded. Currently, this is made
possible by `Cu.import` performing a blocking read on the source file.

It is my understanding that ES6 modules, being designed for the web,
don't expect any kind of sync I/O, and "just" block `onload`.
Transitioning to ES6 modules would undoubtedly require some hackery here
to pause the calling C++ code.



5) The issue of the backstage wrapper

Currently, a number of tests rely upon `Cu.import` to expose the
backstage pass wrapper, i.e.

let {privateSymbol} = Cu.import(...);
  // `privateSymbol` was not exported, but hey, it's still here.

Well, I *hope* it's just tests.

We would need a way to keep these tests working with ES6 modules.
Perhaps by requiring these tests to continue using a `Cu.import`
modified to work with ES6 modules.
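An alternative to keeping a backstage-pass-style escape hatch is to export the private symbols explicitly under a test-only name. A minimal sketch, with a module shape invented purely for illustration:

```javascript
// Illustrative sketch: instead of relying on the backstage-pass wrapper,
// a module exports its internals explicitly under a test-only key.
// The module shape here is invented, not real mozilla-central code.
function createModule() {
  let privateCounter = 0; // not part of the public API

  function publicIncrement() {
    privateCounter++;
  }

  return {
    publicIncrement,
    // Tests import this; production code is expected to ignore it.
    _testOnly: {
      getCounter: () => privateCounter,
    },
  };
}

const mod = createModule();
mod.publicIncrement();
console.log(mod._testOnly.getCounter()); // 1
```

The cost is that each module opts in to exposing its internals, rather than the loader exposing everything; the benefit is that the dependency of tests on private state becomes visible and greppable.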



That's all from the top of my head. At this stage, I suspect that the
best gain/effort ratio is migrating to RequireJS modules, but I'd be
happy to be proved wrong.

Cheers,
 David


Re: So, what's the point of Cu.import, these days?

2016-09-26 Thread David Teller
I agree that my formulation was poor, but that's what I meant: in *a
single webpage*, all these modules behave the same wrt the underlying
objects.



On 26/09/16 18:14, Boris Zbarsky wrote:
> On 9/26/16 12:09 PM, David Teller wrote:
>> In web content, that's also the case with ES6 and require-style modules.
> 
> No, only on a per-global basis.
> 
> Put another way, if you Cu.import the same thing from two instances of
> browser.xul, you will get the same objects.  But if you import the same ES7
> module from two different instances of the same webpage you get
> _different_ objects.
> 
> -Boris


Re: So, what's the point of Cu.import, these days?

2016-09-26 Thread David Teller
In web content, that's also the case with ES6 and require-style modules.
I realize that it's a bit more complicated in chrome code, with all the
XUL + XBL + XPCOM + subscript loader, but I believe that we should be
able to reach the same result.

Cheers,
 David

On 26/09/16 18:01, Joshua Cranmer 🐧 wrote:
> On 9/24/2016 5:13 PM, David Teller wrote:
>> Which begs the question: what's the point of `Cu.import` these days?
> 
> One major difference between Cu.import and ES6/require-style modules is
> that only one version of the script is created with Cu.import. This
> allows you to make databases using Cu.import--every code that calls that
> Cu.import file, whether a chrome JS file or an XPCOM component
> implementation, will be guaranteed to see the same objects once the call
> is made. There are definitely modules that rely on this.
> 


Re: So, what's the point of Cu.import, these days?

2016-09-26 Thread David Teller
Ideally, it would be great to replace our current messy module loading
with something stricter. I suspect, however, that we have subtleties
that won't let us proceed. Let me detail a bit some of the problems that
might occur if we wish to rewrite existing code with a stricter module
loader.



* Side-effects

For one thing, I remember that some of our JS code defers loading its
dependencies (typically, using `XPCOMUtils.defineLazyModuleGetter`) to make
sure that this specific module is loaded after some startup code has
been properly initialized.

I don't remember the specifics, but I recall having seen it in or around
Services.jsm. I also recall that it is necessary for some tests that
mockup XPCOM components, so we need to ensure that the XPCOM components
have time to be installed before the code that depends upon them
actually instantiates them.

I suspect that this hairy behavior is quite the opposite of what ES6
modules are for, and that this may make it impossible to use them in
this context.



* Blocking C++ while JS code is being loaded

It is pretty common for C++ code to call JS code – typically, without
knowing that it's JS, thanks to XPCOM/XPConnect, expecting it to be a
regular function/method call.

If executing this JS code means that we need to somehow load modules,
this means that the loading needs to block the caller.

Is this the case already?



Cheers,
 David

On 26/09/16 12:33, jcoppe...@mozilla.com wrote:
> On Sunday, 25 September 2016 07:32:32 UTC+1, David Teller  wrote:
>> What's the current status of the implementation of ES6 modules?
> 
> ES6 modules are supported for chrome code, but not yet for content (pending 
> spec related discussions that are not relevant for chrome).
> 
> It would be great if we could moving to using standard ES6 modules 
> internally!  If anyone is interested on working on converting the codebase 
> then I can help with this.
> 
> Can you explain the requirement for synchronous loading?  With ES6 modules 
> all imports are determined statically and are loaded before the script is 
> executed, and the spec does not currently provide an API to load a module, 
> synchronously or otherwise.
> 
> Jon


Re: So, what's the point of Cu.import, these days?

2016-09-24 Thread David Teller
What's the current status of the implementation of ES6 modules? Also, to
use them in chrome code, can we support synchronous loading? Or would we
need to make the rewrite more complicated to support asynchronous loading?

On 25/09/16 02:35, Bill McCloskey wrote:
> If we're going to do a mass conversion, shouldn't we try to move to ES
> modules? There's some support for them in SpiderMonkey for chrome code,
> and it would be great to move towards a future standard.
> 
> -Bill
> 


So, what's the point of Cu.import, these days?

2016-09-24 Thread David Teller
Once again, there have been discussions on the feasibility of adding
static analysis to our JS code, possibly as part of MozReview. As usual,
one of the main problems is that we are not using standard JS, so we
pretty much cannot use standard tools.

One of the main differences between mozilla-central JS and standard JS
is our module system. We use `Components.utils.import`, while the rest
of the world is using `require`-style modules. If we could get rid of
`Cu.import`, we would be a very large step closer towards standard JS.

Which begs the question: what's the point of `Cu.import` these days?

Yes, I'm aware that it isolates code in separate compartments, and that
there is a benefit to isolating add-on code from platform code. However,
it is pretty unclear to me that there is any benefit in separating
compartments inside mozilla-central, rather than, say, relying upon
static analysis and/or reviews to ensure that nobody modifies
`Object.prototype` in funky ways.

If we decide to abandon the guarantees provided by compartments to
isolate mozilla-central modules from each other, it's not hard to imagine:
- semi-automated rewrites that could convert mozilla-central code to
RequireJS-style modules, all sharing a single compartment (per process);
- a backwards compatible, compartment-isolating implementation of
`Cu.import` for the sake of add-ons.
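The mechanical shape of such a rewrite can be sketched in plain JavaScript. Both module shapes and the one-map `require`-like loader below are illustrative only, not a proposed implementation:

```javascript
// Sketch of the mechanical JSM -> CommonJS-style rewrite.
// JSM style (before):
//   this.EXPORTED_SYMBOLS = ["Registry"];
//   this.Registry = { ... };
//   // consumers: Cu.import("resource://gre/modules/L10nRegistry.jsm");
//
// CommonJS style (after), all modules sharing one compartment:
function registryModule(module) {
  const Registry = { entries: [] };
  module.exports = { Registry };
}

// A one-map require() cache is enough to keep singleton semantics
// per process, which is what Cu.import guarantees today:
const cache = new Map();
function requireLike(id, factory) {
  if (!cache.has(id)) {
    const module = { exports: {} };
    factory(module);
    cache.set(id, module.exports);
  }
  return cache.get(id);
}

const { Registry } = requireLike("l10n-registry", registryModule);
const again = requireLike("l10n-registry", registryModule);
console.log(Registry === again.Registry); // true — one instance per process
```

Because the transformation is largely syntactic (`EXPORTED_SYMBOLS` plus global assignments become `module.exports`), it is the kind of rewrite a codemod could apply semi-automatically, with humans reviewing only the modules that rely on compartment isolation.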

There would also be side-benefits in terms of memory usage, which is
always good to have.

So, can anybody think of good reason to not do this?

Cheers,
 David


Re: Storage in Gecko

2013-05-02 Thread David Teller
Whatever you do, please, please, please make sure that everything is 
worker-friendly.
If we can't write (or at least read) contents to that Key-Value store from a 
worker, we will need to reimplement everything in a few months.

Cheers,
 David

- Original Message -
From: "Gregory Szorc" 
To: "Lawrence Mandel" 
Cc: "David Rajchenbach-Teller" , "Taras Glek" 
, "dev-platform" 
Sent: Friday, May 3, 2013 1:36:15 AM
Subject: Re: Storage in Gecko

On 5/2/2013 4:13 PM, Lawrence Mandel wrote:
>
> - Original Message -
>> Great post, Taras!
>>
>> Per IRC conversations, we'd like to move subsequent discussion of
>> actions into a meeting so we can more quickly arrive at a resolution.
>>
>> Please meet in Gregory Szorc's Vidyo Room at 1400 PDT Tuesday, April
>> 30.
>> That's 2200 UTC. Apologies to the European and east coast crowds. If
>> you'll miss it because it's too late, let me know and I'll consider
>> moving it.
>>
>> https://v.mozilla.com/flex.html?roomdirect.html&key=yJWrGKmbSi6S
> Did someone post a summary of this meeting? Is there a link to share?

Notes at https://etherpad.mozilla.org/storage-in-gecko

We seemed to converge on a (presumably C++-based) storage service that 
has named branches/buckets with specific consistency, flushing, etc 
guarantees. Clients would obtain a handle on a "branch," and perform 
basic I/O operations, including transactions. Branches could be created 
ad-hoc at run-time. So add-ons could obtain their own storage namespace 
with the storage guarantees of their choosing. Under the hood storage 
would be isolated so failures in one component wouldn't affect everybody.
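The branch/bucket model described above could look roughly like this. Everything here is an in-memory sketch with invented names (`StorageService`, `getBranch`, `durability`), since the meeting did not settle on an actual API:

```javascript
// In-memory sketch of the "named branch" storage idea from the notes.
// All names (StorageService, getBranch, durability) are invented here;
// no concrete API was specified at the meeting.
class StorageBranch {
  constructor(options) {
    this.options = options; // e.g. { durability: "relaxed" }
    this.data = new Map();
  }
  get(key) { return this.data.get(key); }
  set(key, value) { this.data.set(key, value); }
  // Toy "transaction": apply a batch of writes atomically or not at all.
  transact(writes) {
    const snapshot = new Map(this.data);
    try {
      for (const [key, value] of writes) this.data.set(key, value);
    } catch (e) {
      this.data = snapshot; // roll back on failure
      throw e;
    }
  }
}

class StorageService {
  constructor() { this.branches = new Map(); }
  // Branches are created ad hoc, so add-ons can claim their own namespace
  // with the guarantees of their choosing.
  getBranch(name, options = {}) {
    if (!this.branches.has(name)) {
      this.branches.set(name, new StorageBranch(options));
    }
    return this.branches.get(name);
  }
}

const service = new StorageService();
const prefs = service.getBranch("my-addon", { durability: "relaxed" });
prefs.transact([["theme", "dark"], ["fontSize", 12]]);
console.log(prefs.get("theme")); // "dark"
```

A real implementation would back each branch with isolated on-disk storage so corruption in one component's branch cannot take down another's, which is the isolation property the notes call out.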

We didn't have enough time to get into prototyping or figuring out who 
would implement it.

Going forward, I'm not sure who should own this initiative on a 
technical level. In classical Mozilla fashion the person who brings it 
up is responsible. That would be me. However, I haven't written a single 
line of C++ for Firefox and I have serious doubts I'd be effective. 
Perhaps we should talk about it at the next Platform meeting.