Re: Proposed W3C Charter: WebAssembly Working Group

2020-01-29 Thread Luke Wagner
I've been involved in the re-drafting of this charter as part of the
regular wasm WG meetings and I think we should support it.

Cheers,
Luke

On Tue, Jan 21, 2020 at 11:10 AM L. David Baron  wrote:

> The W3C is proposing a revised charter for:
>
>   WebAssembly Working Group
>   https://www.w3.org/2020/01/wasm-wg-charter-2020-proposed.html
>   https://lists.w3.org/Archives/Public/public-new-work/2020Jan/0003.html
>
> The differences from the previous charter are:
>
> https://services.w3.org/htmldiff?doc1=https%3A%2F%2Fwww.w3.org%2F2017%2F08%2Fwasm-charter&doc2=https%3A%2F%2Fwww.w3.org%2F2020%2F01%2Fwasm-wg-charter-2020-proposed.html
>
> Mozilla has the opportunity to send comments or objections through
> Thursday, February 13.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.  (We should probably say something, even if
> it's just support, given our involvement.)
>
> -David
>
> --
> 𝄞   L. David Baron                         https://dbaron.org/   𝄂
> 𝄢   Mozilla                          https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)


Re: Intent to implement: Dynamic module imports (JS 'import()' syntax)

2018-10-20 Thread Luke Wagner
Since dynamic import is a core part of the JS language, with dedicated
syntax (`import` is not a plain function, but a fixed syntactic form), it
would seem to fall under the new, underlined JS exception in the first
section of
https://blog.mozilla.org/security/2018/01/15/secure-contexts-everywhere.
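
For concreteness, a minimal sketch of the syntactic form in question (the
"./feature.js" module path and its `run` export are hypothetical, purely
for illustration):

  // import(...) looks like a function call, but `import` is not a
  // first-class value: `const f = import;` is a SyntaxError.
  async function loadFeature() {
    const mod = await import("./feature.js");  // hypothetical module path
    mod.run();                                 // hypothetical export
  }

  // From non-async code, via the returned promise:
  import("./feature.js").then(mod => mod.run(), err => console.error(err));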

On Sat, Oct 20, 2018 at 3:54 AM L. David Baron  wrote:

> On Thursday 2018-10-18 07:45 -0700, Jon Coppeard wrote:
> > Do other browser engines implement this?
> >
> > Chrome shipped this in 63, Safari in 11.1, and it's in development in
> > Edge.
> >
> > web-platform-tests:
> >
> >
> > https://github.com/web-platform-tests/wpt/tree/master/html/semantics/scripting-1/the-script-element/module/dynamic-import
> >
> > Is this feature restricted to secure contexts?
> >
> > No, it's not restricted as it's part of script execution and works
> > everywhere that <script> is allowed.
>
> Part of the idea behind restricting things to secure contexts is
> that we will have to add secure context tests to areas where they
> aren't currently present.  So being part of script execution doesn't
> seem to me to be an adequate reason for not restricting.
>
> Have both Chrome and Safari shipped it without a secure context
> restriction?  (If not, we should probably restrict.  If so... it's
> perhaps a more interesting question -- maybe we shouldn't, or maybe
> it should depend on current usage levels.)
>
> -David
>
> --
> 𝄞   L. David Baron                          http://dbaron.org/   𝄂
> 𝄢   Mozilla                          https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)


Re: Rust and --enable-shared-js

2018-10-02 Thread Luke Wagner
(Sorry, I polled #jsapi about this issue back when you first posted and
then forgot to reply with the response.)

It doesn't seem like any SM devs use --enable-shared-js for their own
development but we do know that various embedders (e.g. GNOME) use the JS
shared library and so we'd like to keep that configuration tested and
working.  One hybrid option is thus:
 - drop support for building Gecko with --enable-shared-js (avoiding the
symbol conflict issue), but keep --enable-shared-js as a configure option
for JS shell builds
 - have at least one tier-1 --enable-shared-js JS shell build on automation
so that we at least keep it working

Cheers,
Luke


On Tue, Oct 2, 2018 at 7:25 AM Henri Sivonen  wrote:

> On Mon, Sep 24, 2018 at 3:24 PM, Boris Zbarsky  wrote:
> > On 9/24/18 4:04 AM, Henri Sivonen wrote:
> >>
> >> How important is --enable-shared-js? I gather its use case is making
> >> builds faster for SpiderMonkey developers.
> >
> >
> > My use case for it is to be able to use the "exclude samples from
> > library X" or "collapse library X" tools in profilers (like Instruments)
> > to more easily break down profiles into "page JS" and "Gecko things".
>
> OK.
>
> On Mon, Sep 24, 2018 at 1:24 PM, Mike Hommey  wrote:
> >> How important is --enable-shared-js? I gather its use case is making
> >> builds faster for SpiderMonkey developers. Is that the only use case?
> >
> > for _Gecko_ developers.
>
> This surprises me. Doesn't the build system take care of not
> rebuilding SpiderMonkey if it hasn't been edited? Is this only about
> the link time?
>
> What's the conclusion regarding next steps? Should I introduce
> js_-prefixed copies of the four Rust FFI functions that I want to make
> available to SpiderMonkey?
>
> --
> Henri Sivonen
> hsivo...@mozilla.com


Re: CPU core count game!

2018-04-05 Thread Luke Wagner
fbertsch helpfully wrote a query that breaks down physical cores into the %
with and without HT enabled:
  https://sql.telemetry.mozilla.org/queries/47219/source
From this we can see that, e.g., 6.7% of systems that report "2 logical
cores" (and ~2% of all systems) actually have only 1 physical core with 2
hyperthreads.  This seemed like the worst case for heuristics that look only
at logical cores (which, with only one exception that I can see [1], is most
of our heuristics).
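
(To illustrate the hazard, a hypothetical sketch of that class of heuristic:
a 1-core/2-thread machine and a true 2-core machine take exactly the same
path here, so the former ends up overcommitted:)

  // navigator.hardwareConcurrency reports *logical* cores only.
  const logical = navigator.hardwareConcurrency || 2;
  // Hypothetical sizing rule: leave one core for the main thread.
  const poolSize = Math.max(1, logical - 1);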

That SQL query came out of a more general request to report logical and
physical info in the dashboard:
  https://github.com/mozilla/firefox-hardware-report/issues/60
If enough people are regularly interested in this data, it'd be good to
bump the priority of that issue.

Cheers,
Luke


[1]
https://searchfox.org/mozilla-central/source/gfx/layers/PaintThread.cpp#132


On Wed, Mar 28, 2018 at 2:03 PM, Steve Fink  wrote:

> Yes, sorry, a couple of people pointed that out to me privately. And I did
> get that mixed up; I was assuming processors, despite the page specifically
> pointing out "physical cores".
>
> I still think there's something to be kept in mind here, though. Even with
> 4 processors (2 hyperthreaded cores or whatever), it's never correct to
> assume that running something on a different thread is a silver bullet for
> performance problems. I'm all for increasing the concurrency of our code as
> long as we ensure that it doesn't hurt in the case of low levels of actual
> parallelism.
>
> What that means in practice, I'm not entirely sure, but it does seem like
> we should be more conscious about thread priorities and global thread pool
> management. Also, lock contention is a real thing. It has been coming up
> here and there and wiping out parallelization gains.
>
>
> On 3/28/18 10:27 AM, Ben Kelly wrote:
>
>> That page says "physical cores", so it's not taking into account
>> hyperthreading, right?  So even a high-end MacBook Pro falls in that
>> category?
>>
>> On Tue, Mar 27, 2018 at 5:02 PM, Mike Conley wrote:
>>
>> Thanks for drawing attention to this, sfink.
>>
>> This is likely to become more important as we continue to scale up our
>> parallelization with content processes and threads.
>>
>> On 21 March 2018 at 14:54, Steve Fink wrote:
>>
>> > Just to drive home a point, let's play a game.
>> >
>> > First, guesstimate what percentage of our users have systems with 2 or
>> > fewer cores.
>> >
>> > Then visit https://hardware.metrics.mozilla.com/#goto-cpu-and-memory to
>> > check your guess.
>> >
>> > (I didn't say it was a *fun* game.)
>> >
>> >


Re: SharedArrayBuffer and Atomics will ride the trains behind a pref

2016-01-14 Thread Luke Wagner
For additional rationale, you might be interested to read:
  
https://blog.mozilla.org/javascript/2015/02/26/the-path-to-parallel-javascript/
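
(For a taste of the API being enabled, a minimal sketch; "worker.js" is a
hypothetical file name:)

  // main.js: share 4 bytes of memory with a worker.
  const sab = new SharedArrayBuffer(4);
  const shared = new Int32Array(sab);
  const worker = new Worker("worker.js");
  worker.postMessage(sab);               // both sides now see the same memory

  // worker.js:
  onmessage = (e) => {
    const shared = new Int32Array(e.data);
    Atomics.add(shared, 0, 1);           // race-free increment
    postMessage(Atomics.load(shared, 0));
  };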

On Thu, Jan 14, 2016 at 8:11 AM, Thomas Zimmermann wrote:
> Thanks!
>
>> On 14.01.2016 at 15:08, Lars Hansen wrote:
>> On Thu, Jan 14, 2016 at 2:49 PM, Thomas Zimmermann wrote:
>>
>>> Hi,
>>>
>>> I saw the lightning talk you gave in Orlando on this topic. I was
>>> wondering if you considered implementing Transactional Memory for
>>> SharedArrayBuffer.
>>
>> I have not (or, not in earnest).
>>
>>
>>
>>> JS seems like the perfect environment for TM. Are
>>> there reasons for 'only' providing atomic ops? Just asking out of
>>> curiosity...
>>>
>> The use cases that drive this work are access to multicore performance
>> from JS as well as asm.js as a compilation target for conventional
>> multithreaded C++; actually the asm.js case is the more important one at
>> this time.  Hence the focus for this first version of the spec has been on
>> (very) low level mechanisms that can serve those use cases in
>> straightforward ways.
>>
>> Personally I'd like to see us add additional higher-level mechanisms that
>> are a better fit for straight JS programming.  I'm hoping that we can use
>> the current low level mechanisms to prototype higher level ones, and
>> eventually standardize some of them.  I don't know how well we can
>> prototype TM like that - but it's early days still.
>>
>> --lars
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>
>>> On 14.01.2016 at 14:16, Lars Hansen wrote:
>>>> Until now the new SharedArrayBuffer constructor and the new Atomics
>>>> global object [1] have been enabled on Nightly only.  Starting with
>>>> Firefox 46, those bindings will still be enabled by default on Nightly
>>>> but they will also be available on Aurora, DevEd, Beta, and Release by
>>>> flipping the value of javascript.options.shared_memory to true in
>>>> about:config.
>>>>
>>>> --lars
>>>>
>>>> [1] http://lars-t-hansen.github.io/ecmascript_sharedmem/shmem.html


Proposal: revisit and implement navigator.hardwareConcurrency

2015-09-08 Thread Luke Wagner
Since the original m.d.p thread on hardwareConcurrency last year:
  https://groups.google.com/d/topic/mozilla.dev.platform/QnhfUVw9jCI/discussion
the landscape has shifted (as always) and I think we should reevaluate
and implement this feature.

What hasn't changed are the arguments, made in the original thread,
that hardwareConcurrency is a clumsy way to decide how many workers to
create and can lead to the creation of too many workers in various
scenarios (e.g., multiple tabs all attempting to saturate all the cores,
cores vs. hyperthreads).

What has changed is the appearance of more compelling use cases.  In
particular, the upcoming support for SharedArrayBuffer [1][2] allows
Emscripten to compile pthreads [3] applications, which has been the #1
compile-to-web feature request over the last few years. Specifically, native
game engines find the number of logical cores on the machine (using APIs
present in C++11, etc.), and use a number of threads based on that (often
adjusted, and they have a lot of experience tuning this). They would like to
do the same on the web, and Chrome and Safari already let them. In the
absence of hardwareConcurrency, developers are forced to resort to either
hardcoding a constant number of workers or using a polyfill library [4] that
estimates the number of cores. Unfortunately, the polyfill takes a few
seconds (hurting startup time) and produces inaccurate results (based on
evaluations from multiple parties) [5]. Thus, while hardwareConcurrency
isn't ideal, it's strictly better than what developers have now in Firefox.
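
(Concretely, a hedged sketch of the developer-facing difference; the
"compute.js" worker script is hypothetical:)

  // Today in Firefox: hardcode a guess, or spend seconds in a polyfill [4].
  // With the attribute, the count is immediate and accurate:
  const cores = navigator.hardwareConcurrency || 4;  // fall back to a guess
  const workers = [];
  for (let i = 0; i < cores; i++)
    workers.push(new Worker("compute.js"));          // hypothetical script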

Moreover, I don't think the applicability of hardwareConcurrency is
limited to compile-to-web uses.  I think all the use cases we're
seeing now from compiled native apps will manifest in JS apps further
down the line as worker usage becomes more commonplace and
applications grow more demanding.  As in many other cases, I think
games are serving as a catalyst here, proving what's possible and
paving the way for fleets of non-game applications.

But will the existence of hardwareConcurrency encourage bad behavior
in everyday web browsing?  I don't think so.  First of all,
hardwareConcurrency is meant to help good actors who want to
ensure a good experience for their users.  Bad actors can already
saturate all your cores with Workers. Thus, as Worker (mis)use
becomes more widespread on the Web, it seems inevitable we'll need
to do some form of Worker throttling (via thread priority or
SuspendThread/pthread_kill) of background/invisible windows *anyway*;
it seems like the only reason we haven't had to do this already is
because Workers just aren't used that much in normal web apps.  For
good actors, though, it is possible to mitigate some of the clumsiness
of hardwareConcurrency: using SharedWorkers to detect the "same
app open in many tabs" case; using the PageVisibility API to pause
work when not visible (which will likely happen anyway in frame-driven
applications based on requestAnimationFrame throttling of background
tabs).  Lastly, for neither good nor bad actors, I think the hazard of
casual/widespread use is more limited by the hurdles of using workers
at all (w/ or w/o SharedArrayBuffer).
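
(A sketch of the PageVisibility mitigation mentioned above; pauseWorkers and
resumeWorkers are hypothetical app-defined functions that stop and restart
handing work items to the pool:)

  document.addEventListener("visibilitychange", () => {
    if (document.hidden)
      pauseWorkers();    // hypothetical: don't saturate cores while invisible
    else
      resumeWorkers();   // hypothetical: pick work back up
  });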

Will we get stuck with hardwareConcurrency forever?  I don't think
so.  Further down the line, as more web apps take advantage of workers
and we find real examples of CPU contention for which throttling
mitigations aren't sufficient, we will be motivated to improve and
propose a more responsive API.  However, I don't think we can design
that API now: we don't have the use cases to evaluate the API against.
This is the basic Web evolutionary strategy.

On the subject of fingerprinting: as stated above, core count can
already be roughly measured [4].  While the extra precision and speed
of hardwareConcurrency does make fingerprinting somewhat easier, as
we've done with other features, we need to weigh the value to users
against information revealed.  In this case, it seems like the ratio
is pretty heavily weighted toward the value.

On a more technical note: WebKit and Chromium have both shipped,
returning the number of logical processors; WebKit additionally clamps
to 2 (on iOS) or 8 (otherwise) [6], which is explicitly allowed by the
WHATWG text [7].  I would argue for not clamping (like Chrome),
although I do think we'll have a good amount of flexibility to change
clamping over time based on experience.
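
(If clamping ever proved necessary, it's a one-line policy; a sketch, not a
proposal, where rawLogicalCores is a hypothetical name for the OS-reported
count and 8 is WebKit's non-iOS clamp [6]:)

  const reported = Math.min(rawLogicalCores, 8);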

How does that sound?

Cheers,
Luke

[1] 
https://blog.mozilla.org/javascript/2015/02/26/the-path-to-parallel-javascript/
[2] https://github.com/lars-t-hansen/ecmascript_sharedmem
[3] 
https://groups.google.com/forum/#!msg/emscripten-discuss/gQQRjajQ6iY/DcYQpQyPQmIJ
[4] http://wg.oftn.org/projects/core-estimator/demo/
[5] https://bugs.webkit.org/show_bug.cgi?id=132588#c86
[6] https://trac.webkit.org/browser/trunk/Source/WebCore/page/Navigator.cpp#L137
[7] https://wiki.whatwg.org/wiki/Navigator_HW_Concurrency

Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Luke Wagner
If we do unify Gecko/SpiderMonkey styles (something it seems like we're
moving towards and I think would be great), it would be a real shame to
switch 'cx' (a parameter to basically every function in SpiderMonkey) to
'aCx'; that would really make some eyes bleed.  One compromise could be to
drop the 'a'-prefix requirement for 1- or 2-length parameter names, since
this is when it really looks silly.  (But I'd prefer to drop the 'a' prefix
altogether.)

On Tue, Jul 7, 2015 at 7:38 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 7/7/15 11:49 AM, Mike Conley wrote:

 I suspect that knowing what things were passed into a method or function
 is something that can be divined via static analysis.

 Aren't there tools for our (admittedly varied) editors / IDEs


 And debuggers.  And dxr and blame views?


 -Boris


Re: The War on Warnings

2015-06-04 Thread Luke Wagner
In addition to judging noisiness by volume over a whole test run, can we
also include any warning that happens on normal browser startup, new tab,
and other vanilla browser operations?  This has always annoyed me.

On Thu, Jun 4, 2015 at 3:33 PM, Bobby Holley bobbyhol...@gmail.com wrote:

 On Thu, Jun 4, 2015 at 1:18 PM, smaug sm...@welho.com wrote:

  More likely we need to change a small number of noisy NS_ENSURE_* macro
  users to use something else,
  and keep most of the NS_ENSURE_* usage as it is.
 

 +1.


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Luke Wagner
It definitely makes sense to start your performance investigation with
processCount=1 since that will likely highlight the low-hanging fruit which
should be fixed regardless of processCount.

My question is: after a decent period of time picking the low-hanging
fruit, if there is still non-trivial spinner time for processCount=1, would
the team consider shifting efforts to getting processCount > 1 ship-worthy
instead of resorting to heroics to get processCount=1 ship-worthy?

On Tue, May 5, 2015 at 10:41 AM, Mike Conley mcon...@mozilla.com wrote:

  Is there a more detailed description of what the issues with multiple
 content processes are that e10s itself doesn't suffer from?

 I'm interpreting this as, What are the problems with multiple content
 processes that single process does not have, from the user's perspective?

 This is mostly unknown, simply because dom.ipc.processCount > 1 is not
 well tested. Most (if not all) e10s tests test a single content process.
 As a team, when a bug is filed and we see that it's only reproducible
 with dom.ipc.processCount > 1, the priority immediately drops, because
 we're just not focusing on it.

 So the issues with dom.ipc.processCount are mostly unknown - although a
 few have been filed:

 https://bugzilla.mozilla.org/buglist.cgi?quicksearch=processCount&list_id=12230722

  One of the extremely common cases where I used to get the spinner was
 when my system was under load (outside of Firefox.)  I doubt that we're
 going to be able to fix anything in our code to prevent showing the
 spinner in such circumstances.

 Yes, I experience that too - I often see the spinner when I have many
 tabs open and I'm doing a build in the background.

 I think it's worth diving in here and investigating what is occurring in
 that case. I have my suspicions that
 https://bugzilla.mozilla.org/show_bug.cgi?id=1161166 is a big culprit on
 OS X, but have nothing but profiles to back that up. My own experience
 is that the spinner is far more prevalent in the many-tabs-and-build
 case on OS X than on other platforms, which makes me suspect that we're
 just doing something wrong somewhere - with bug 1161166 being my top
 suspect.

  Another such common case would be one CPU hungry tab.

 I think this falls more under the domain of UX. With single-process
 Firefox, the whole browser locks up, and we (usually) show a modal
 dialog asking the user if they want to stop the script. In other cases,
 we just jank and fail to draw frames until the process is ready.

 With a content process, the UI remains responsive, but we get this
 bigass spinner. That's not an amazing trade-off - it's much uglier and
 louder, IMO, than the whole browser locking up. The big spinner was just
 an animation that we tossed in so that it was clear that a frame was not
 ready (and to avoid just painting random video memory), but I should
 emphasize that it was never meant to ship.

 If we ship the current appearance of the spinner to our release
 population... it would mean that my heels have been ground down to nubs,
 because I will fight tooth and nail to prevent that from happening. I
 suspect UX feels the same.

 So for the case where the content process is being blocked by heavy
 content, we might need to find better techniques to communicate to the
 user what's going on and to give them options. I suspect / hope that bug
 1106527 will carry that work.

 Here's what I know:

 1) Folks are reporting that they _never_ see the spinner when they crank
 up dom.ipc.processCount  1. This is true even when they're doing lots
 of work in the background, like building.

 Here's what I suspect:

 1) I suspect that given the same group of CPU heavy tabs, single-process
 Firefox will currently perform better than e10s with a single content
 process. I suspect we can reach parity here.

 2) I suspect that OS X is where most of the pain is, and I suspect bug
 1161166 is a big part of it.

 Here's what I suggest:

 1) Wait for Telemetry data to come in to get a better sense of who is
 being affected and what conditions they are under. Hopefully, the
 population who have dom.ipc.processCount > 1 are small enough that we
 have useful data for the dom.ipc.processCount = 1 case.

 2) Send me profiles for when you see it.

 3) Be patient as we figure out what is slow and iron it out. Realize
 that we're just starting to look at performance, as we've been focusing
 on stability and making browser features work up until now.

 4) Trust that we're not going to ship The Spinner Experience, because
 shipping it as-is is beyond ill-advised. :D

 -Mike

 On 05/05/2015 10:49 AM, Ehsan Akhgari wrote:
  On 2015-05-05 10:30 AM, Mike Conley wrote:
  The e10s team is currently only focused on getting things to work with a
  single content process at this time. We eventually want to work with
  multiple content processes (as others have pointed out, the exact number
  to work with is not clear), but we're focused on working with a single
  process 

Re: The e10s throbber

2015-04-07 Thread Luke Wagner

  I think we probably want to use a longer delay than 300ms before we show
  the spinner. We'd also like to look into why it takes so long to re-create
  the layer tree when we switch to a tab. Sometimes it's caused by a janky
  content process, but there may be some layout/gfx improvements we could
  make too.
 

 I've been running with dom.ipc.processCount set to 10 for the last two
 months or so. Before, I saw the throbber a lot, but with 10 content
 processes, I can't remember seeing it at all recently. That's probably not
 saying much about the reasons for seeing it with a single content process,
 though.


I have the same experience.  (On a side note, glitches aside, the
experience with dom.ipc.processCount > 1 is *incredibly* less janky,
something I only fully appreciated after using it for a while and then
running normal non-e10s FF.  I expect this is also the experience Chrome
users have when trying FF.)


Re: The browser should cache compiled javascript code while caching html pages

2014-10-17 Thread Luke Wagner
I have a short summary of why caching JIT code is not necessarily a clear win 
for most JS in a blog post:
  
http://blog.mozilla.org/luke/2014/01/14/asm-js-aot-compilation-and-startup-performance/#caching
We do cache machine code for asm.js, though (as also described in the post).

More interesting than caching machine code is caching other bits of data that 
offer a lot more win per byte:
 - function boundaries and whether there were any SyntaxErrors (so we don't have 
to do the initial syntax-only parse)
 - bytecode for the top-level script and definitely-run functions (usually this 
stuff is pretty cold, so bytecode is as far as it ever gets)
 - for the functions that do get jitted: which ones, what types were observed, 
etc, so we can expedite the normal warm-up and recompilation process

This involves attaching blobs of stuff the JS engine wants back next time to 
network cache entries and a whole new path from Necko through Gecko 
to SpiderMonkey, so it's not exactly a small project :)  We've actually done 
some initial work in this direction (motivated by b2g app performance):
  https://bugzilla.mozilla.org/show_bug.cgi?id=900784
but it seems to be on hold atm.  I hope it resumes before long.

Cheers,
Luke

- Original Message -
 Since the html pages are already cached, why not also cache the JIT
 compiled javascript while leaving a page? Shouldn't use too much space
 than the text content of the embedding page. Much less space than the
 image files embedded in a page.
 


proposal to use JS GC Handle/Rooted typedefs everywhere

2013-09-18 Thread Luke Wagner
To save typing, the JS engine has typedefs like
  typedef Handle<JSObject*> HandleObject;
  typedef Rooted<JS::Value> RootedValue;
and the official style is to prefer the HandleX/RootedX typedefs when there is 
no need to use the Handle<X>/Rooted<X> template-ids directly.

This issue was discussed off and on in the JS engine for months, leading to a 
m.d.t.js-engine.internals newsgroup thread 
(https://groups.google.com/forum/#!topic/mozilla.dev.tech.js-engine.internals/meWx5yxofYw)
 where it was discussed more (the high occurrence of Handle/Rooted in the JS 
engine combined with the relatively insignificant difference between the two 
syntactic forms making a perfect bike shed storm).

Given that the JS engine's official style is to use the typedefs, it seems 
like a shame for Gecko to use a different style; while the difference may be 
insignificant, we do strive for consistency.  So, can we agree to use the 
typedefs all over Gecko?  From the m.d.t.js-engine.internals thread I think 
bholley of the kingdom of XPConnect is strongly in favor.

(Again, this doesn't have to be an absolute rule, the needs of meta-programming 
and code-generators can override.)

Cheers,
Luke