Linux App Summit - Nov 12-15, Barcelona

2019-09-11 Thread Jim Blandy
Hi! A friend of mine is helping organize the 2019 Linux App Summit (
https://linuxappsummit.org/), so I wanted to make sure Mozilla devs had
heard about it. (Barcelona is also hosting RustFest.eu from Nov 9-12!)

Here's what LAS has to say for itself:

---

Applications are the foundation of the user experience and can be
appreciated in all types of Linux environments. It is for this reason that
building a common app ecosystem is a valuable goal. At the Linux
Application Summit (LAS), we will collaborate on all aspects aimed at
accelerating the growth of the Linux application ecosystem.

At LAS you can attend talks, panels, and Q&A sessions on a wide range of
topics covering everything from creating, packaging, and distributing apps to
monetization within the Linux ecosystem, designing beautiful applications,
and more - all delivered by the top experts in each field. You will acquire
insights into how to reach users, build a community around your app, what
toolkits and technologies make development easier, which platforms to aim
for, and much more.

LAS welcomes application developers, designers, product managers, user
experience specialists, community leaders, academics, and anyone who is
interested in the state of Linux application design and development!

With that in mind, the topics we are interested in are:

   - Creating, packaging, and distributing applications
   - Design and usability
   - Commercialization
   - Community / Legal
   - Platform
   - Linux App Ecosystem


Slides from Orlando platform memory tools meeting

2018-12-10 Thread Jim Blandy
Hi, everybody. I've asked the folks who presented at the platform memory
tools meeting in Orlando last Thursday over in the Swan::Ibis meeting room
to add a link to their slides or notes to the Google sheet linked below.
(I sent that email just now, so the links are probably not up yet, but I'm
sure that if you check back in a little bit, they will be.)

If you attended, I hope you found it helpful.

Thanks very much to Nick, Andrew, and Kris for taking the time to put
together their presentations.

Google sheet linking to presentations:
https://docs.google.com/spreadsheets/d/1x8CZPdzPWatFW31Bp_8_I1Mp4RWFK2r9rbo_ps_UQ20/edit?usp=sharing


Re: How do I file a bug?

2018-10-08 Thread Jim Blandy
Below is the text from the code of conduct. The CoC does say to be
positive, but it is also at pains to emphasize the importance of being able
to speak directly, being open to being wrong, and valuing others' input.

I read frustration being expressed in this thread, but not disrespect. The
original criticism seems on-point, and there are many specifics provided.
The obligation to criticize constructively does not, I think, extend to an
obligation to "come up with a solution". A bug report without a patch
should still be welcome, if it's accurate and actionable.

The original question did suggest the poster was unsure where to even report
the problem, but at this point that has been cleared up. I agree with Jared
that further discussion belongs where the people who can do something about
the problem can see it.

Be Respectful
>
> Value each other’s ideas, styles and viewpoints. We may not always agree,
> but disagreement is no excuse for poor manners. Be open to different
> possibilities and to being wrong. Be kind in all interactions and
> communications, especially when debating the merits of different options.
> Be aware of your impact and how intense interactions may be affecting
> people. Be direct, constructive and positive. Take responsibility for your
> impact and your mistakes – if someone says they have been harmed through
> your words or actions, listen carefully, apologize sincerely, and correct
> the behavior going forward.
> Be Direct but Professional
>
> We are likely to have some discussions about if and when criticism is
> respectful and when it’s not. We *must* be able to speak directly when we
> disagree and when we think we need to improve. We cannot withhold hard
> truths. Doing so respectfully is hard, doing so when others don’t seem to
> be listening is harder, and hearing such comments when one is the recipient
> can be even harder still. We need to be honest and direct, as well as
> respectful.
>


Re: C++ standards proposal for an embedding library

2018-07-20 Thread Jim Blandy
Reading between the lines, it seems like the committee's aim is to take
something that is widely understood and used, broadly capable, and in the
big picture relatively well-defined (i.e. the Web), and incorporate it into
the C++ standard by reference.

The problem is that the *relationship of web content to surrounding native
app code* is none of those things, and I think you could make a case that
it's been undergoing violent churn for years and years.


On Fri, Jul 20, 2018 at 10:04 AM, Botond Ballo  wrote:

> On Thu, Jul 19, 2018 at 5:35 PM, Mike Hommey  wrote:
> > Other than everything that has already been said in this thread,
> > something bugs me with this proposal: a web view is a very UI thing.
> > And I don't think there's any proposal to add more basic UI elements
> > to the standard library.
>
> Not that I'm aware of.
>
> > So even if a web view is a desirable thing in
> > the long term (and I'm not saying it is!), there are way more things
> > that should come first.
>
> I think the idea behind this proposal is that standardizing a UI
> framework for C++ would be too difficult (seeing as we couldn't even
> agree on a 2D graphics proposal, which is an ingredient in a UI
> framework), so the web view fills the role of the UI framework: your
> UI is built inside the web view. (Not saying that's a good idea or a
> bad idea, just trying to explain the line of thinking.)
>
> Cheers,
> Botond


Re: mozilla-central now compiles with C++14

2017-11-16 Thread Jim Blandy
Oh, this is great!! I was going to have to use horrible kludges to get
around the lack of generic lambdas.
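
To make that concrete, here is a minimal sketch (mine, not from the thread,
with standard-library containers standing in for Gecko types) of the
duplication a single C++14 generic lambda removes:

  #include <algorithm>
  #include <cassert>
  #include <string>
  #include <vector>

  int main() {
    std::vector<int> lengths = {3, 1, 2};
    std::vector<std::string> names = {"c", "a", "b"};

    // One C++14 generic lambda: the `auto` parameters are deduced at each
    // call site, so a single closure sorts both containers. Before C++14
    // this took two separate lambdas or a hand-written functor template.
    auto ascending = [](const auto& a, const auto& b) { return a < b; };

    std::sort(lengths.begin(), lengths.end(), ascending);
    std::sort(names.begin(), names.end(), ascending);

    assert(lengths.front() == 1);
    assert(names.front() == "a");
    return 0;
  }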

On Wed, Nov 15, 2017 at 8:44 PM, Nathan Froyd  wrote:

> C++14 constructs are now usable in mozilla-central and related trees.
> According to:
>
> https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code
>
> this opens up the following features for use:
>
> * binary literals (0b001)
> * return type deduction
> * generic lambdas
> * initialized lambda captures
> * digit separators in numeric constants
> * [[deprecated]] attribute
>
> My personal feeling is that all of these features minus return type
> deduction seem pretty reasonable to use immediately, but I would
> welcome comments to the contrary.
>
> Please note that our minimum GCC version remains at 4.9: I have seen
> reports that GCC 4.9 might not always be as adept at compiling C++14
> constructs as one might like, so you may want to be a little cautious
> and use try to make sure GCC 4.9 does the right thing.
>
> Starting the race to lobby for C++17 support in three...two...one... =D
>
> Happy hacking,
> -Nathan


Re: Visual Studio 2017 coming soon

2017-10-30 Thread Jim Blandy
Okay, this is half the argument. The second half would be:

- Does auto cause such mistakes more often than it prevents them? The
benefit claimed for auto is that it usually makes code more legible.
Hopefully that prevents mistakes, on balance.

- Is ranged-for more prone to iterator invalidation errors than the older
form? I believe I've seen .Length() calls hoisted out of old-form loop
conditions pretty frequently. The advantage of ranged-for is claimed to be
that it depends on the operand's iteration API, instead of requiring the
programmer to invent an iteration technique each time they write a loop.

- Are closures more prone to ownership mistakes than the pre-closure
technique? How does this compare with their benefits to legibility?

When evaluating the impact of new features, we should not let the
familiarity of the mistakes we've been making in C++98 for twenty years
cause us to focus only on the risks from change. That misjudgment would
hurt the quality of the code.
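
For concreteness, here is a minimal sketch (standard containers standing in
for nsTArray; not code from the tree) of the ranged-for pitfall smaug
describes in the quoted message below:

  #include <cstdio>
  #include <vector>

  void NotifyAll(std::vector<int>& observers) {
    // Index form: size() is re-read on every iteration, so elements appended
    // by the body are seen, and no stored iterator can go stale.
    for (size_t i = 0; i < observers.size(); ++i) {
      if (observers[i] == 0) {
        observers.push_back(42);  // reallocation is harmless here
      }
    }

    // Ranged-for form: the hidden begin()/end() iterators are captured once.
    // If this body called push_back() and the vector reallocated, those
    // iterators would dangle, which is the class of crash described below.
    // It is only safe because this body never mutates the container.
    for (int value : observers) {
      printf("%d\n", value);
    }
  }

  int main() {
    std::vector<int> observers = {0, 1, 2};
    NotifyAll(observers);
    return 0;
  }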



On Mon, Oct 30, 2017 at 8:03 AM, smaug  wrote:

> On 10/30/2017 04:52 PM, Simon Sapin wrote:
>
>> On 30/10/17 15:05, smaug wrote:
>>
>>> And let's be careful with the new C++ features, pretty please. We
>>> managed to not be careful when we started to use auto, or ranged-for
>>> or lambdas. I'd prefer to not fix more security critical bugs or
>>> memory leaks just because of fancy hip and cool language features ;)
>>>
>>
>> Careful how? How do new language features lead to security bugs? Is new
>> compiler code not as well tested and could have miscompiles? Are specific
>> features easy to misuse?
>>
>>
>
> With auto we've managed to hide the ownership of some objects from
> reader/reviewer (and I guess also from the patch author),
> and this has led both to security issues and memory leaks.
>
> Ranged-for led to security-critical crashes when we converted some
> old-style
> for (i = 0; i < array.Length(); ++i) loops to use it, since ranged-for
> doesn't play well when the array changes underneath you.
> These days we crash safely there.
>
> With lambdas, understanding who owns what becomes harder, and before some
> checks were added, we had (I think for a rather short while) issues where
> a raw pointer to a refcounted object was captured in a lambda and
> the lambda was then dispatched to the event loop.
> Nothing guaranteed that the captured object would stay alive.
>
> Basically, some "new" features have hidden important aspects of the
> lifetime management of objects, and by
> doing that, made it easier to write broken code and harder for
> reviewers to catch the mistakes.
>
>
>
> -Olli
>


Re: Fennec now builds with clang instead of gcc

2017-10-29 Thread Jim Blandy
As I said in the other thread, I'm eager for generic lambdas. They would
let me avoid trying to resurrect mfbt/Function.h and adding support for
allocation policies! So: quite eager.

On Sun, Oct 29, 2017 at 6:19 PM, Makoto Kato 
wrote:

> Great!  BTW, has the minimum required NDK version changed to NDK r15c,
> which is used in the taskcluster jobs?  Or does it keep NDK r11c?
>
> -- Makoto
>
>
> On Mon, Oct 30, 2017 at 8:15 AM, Nathan Froyd  wrote:
> > Hi all,
> >
> > Bug 1163171 has been merged to mozilla-central, moving our Android
> > builds over to using clang instead of GCC.  Google has indicated that
> > the next major NDK release will render GCC unsupported (no bugfixes
> > will be provided), and that it will be removed entirely in the near
> > future.  Switching to clang now makes future NDK upgrades easier,
> > provides for better integration with the Android development tools,
> > and brings improvements in performance/code size/standards support.
> >
> > For non-Android platforms, the good news here is that compiling Fennec
> > with clang was the last major blocker for turning on C++14 support.
> > Using clang on Android also opens up the possibility of running our
> > static analyses on Android.
> >
> > If you run into issues, please file bugs blocking bug 1163171.
> >
> > Thanks,
> > -Nathan


Re: Visual Studio 2017 coming soon

2017-10-29 Thread Jim Blandy
How will this affect the matrix of specific C++ features we can use?
https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code

(At the moment I'm dying for generic lambdas, which are C++14. I'd been
using std::function as a workaround, but I also need control over the
allocation policy, which std::function no longer offers.)
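
(One workaround I'm aware of, sketched below with hypothetical names rather
than anything from the tree, is to accept the callable as a template
parameter instead of erasing it behind std::function; then there is no
hidden allocation to control at all.)

  #include <cstdio>

  // Passing the callable as a template parameter keeps it by reference with
  // no type erasure, so no heap allocation and no allocator question. The
  // cost: the function must be visible to callers (header-only), and each
  // distinct callable type produces a separate instantiation.
  template <typename Callback>
  void ForEachCell(int rows, int cols, Callback&& callback) {
    for (int r = 0; r < rows; ++r) {
      for (int c = 0; c < cols; ++c) {
        callback(r, c);
      }
    }
  }

  int main() {
    int visited = 0;
    // A C++14 generic lambda slots in directly, with no std::function in sight.
    ForEachCell(2, 3, [&](auto r, auto c) {
      ++visited;
      printf("cell %d,%d\n", static_cast<int>(r), static_cast<int>(c));
    });
    return visited == 6 ? 0 : 1;
  }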


On Wed, Oct 25, 2017 at 2:48 PM, David Major  wrote:

> I'm planning to move production Windows builds to VS2017 (15.4.1) in bug
> 1408789.
>
> VS2017 has optimizer improvements that produce faster code. I've seen 3-6%
> improvement on Speedometer. There is also increased support for C++14 and
> C++17 language features:
> https://docs.microsoft.com/en-us/cpp/visual-cpp-language-conformance
>
> These days we tend not to support older VS for too long, so after some
> transition period you can probably expect that VS2017 will be required to
> build locally, ifdefs can be removed, etc. VS2017 Community Edition is a
> free download and it can coexist with previous compilers. Installation
> instructions are at:
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Windows_Prerequisites#Visual_Studio_2017
>
> If you have concerns, please talk to me or visit the bug. Thanks!


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-09-05 Thread Jim Blandy
As an offer of help, from a group whose charter covers this work, that's
very welcome. I felt that I was being shepherded into something on behalf
of others for whom I cannot speak, which was uncomfortable!

For my own sake, I am disinclined to participate in a standardization
effort outside of the usual institutions. I think we could benefit from the
experience of people who have produced web standards before. I'm not
well-connected enough yet to anticipate what folks will say, but I'll
suggest the Browser Testing Tools WG when the opportunity comes up.


On Tue, Sep 5, 2017 at 5:32 AM, James Graham <ja...@hoppipolla.co.uk> wrote:

> On 04/09/17 23:34, Jim Blandy wrote:
>
>> On Mon, Sep 4, 2017 at 7:36 AM, David Burns <dbu...@mozilla.com> wrote:
>>
>>> I don't think anyone would disagree with the reasons for doing this. I,
>>>
>> like James who brought it up earlier, am concerned that, from the emails,
>> we appear to think that implementing the wire protocol would be sufficient
>> to make sure we have the same semantics.
>>
>> LOL, give us a little credit, okay? The authors of the email do not think
>> that. We want to have a properly written specification and conformance
>> tests. I think you're reading "we have no interest in established
>> standardization processes" when what we wrote was "the process is in very
>> early stages".
>>
>> Do you think the Browser Testing Tools WG is the right body to work on a
>> JS
>> debugging and console protocol, used by interactive developer tools? That
>> seems like a surprising choice to me.
>>
>
> It is certainly not the only possible venue, but if you want to do the
> work at the W3C then it's probably the easiest way to get things going from
> a Process point of view, since this kind of protocol would be in the
> general remit of the group, and the rechartering could add it specifically.
> Certainly the people currently in the group aren't the right ones to do the
> work, but adding new participants to work specifically on this would be
> trivial.
>
> Also - at least as far as I know -  this is not where the current
>> participants in the discussion (Kenneth Auchenberg or Christian Bromann,
>> to
>> name two) have been working. Is having a previously uninvolved standards
>> committee take up an area in which current activity is occurring elsewhere
>> considered friendly and cooperative behavior? It seems unfriendly to me. I
>> would like to avoid upsetting the people I'm hoping to work closely with.
>>
>
> I think you have misinterpreted the intent here. I don't think anyone is
> interested in doing a hostile takeover of existing work. But there is
> concern that the work actually happens. Pointing at remotedebug.org,
> which has been around since 2013 without producing any specification
> materials, isn't helping assuage my concerns, and I guess others are having
> a similar reaction. It is of course entirely possible that there's work
> going on that we can't see. But my interpretation of David's email is that
> he is trying to offer you options, not force you down a certain path. The
> W3C is not always the right venue to work in, but it is sometimes sought
> out by organisations who would likely participate in this work because of
> its relatively strong IPR policy.
>
> I should stress that irrespective of venue I would expect this
> standardisation effort to take years; people always underestimate the work
> and time required for standards work. It will certainly require us to
> commit resources to make it happen.
>


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-09-04 Thread Jim Blandy
On Mon, Sep 4, 2017 at 7:36 AM, David Burns  wrote:
> I don't think anyone would disagree with the reasons for doing this. I,
> like James who brought it up earlier, am concerned that, from the emails,
> we appear to think that implementing the wire protocol would be sufficient
> to make sure we have the same semantics.

LOL, give us a little credit, okay? The authors of the email do not think
that. We want to have a properly written specification and conformance
tests. I think you're reading "we have no interest in established
standardization processes" when what we wrote was "the process is in very
early stages".

Do you think the Browser Testing Tools WG is the right body to work on a JS
debugging and console protocol, used by interactive developer tools? That
seems like a surprising choice to me.

Also - at least as far as I know -  this is not where the current
participants in the discussion (Kenneth Auchenberg or Christian Bromann, to
name two) have been working. Is having a previously uninvolved standards
committee take up an area in which current activity is occurring elsewhere
considered friendly and cooperative behavior? It seems unfriendly to me. I
would like to avoid upsetting the people I'm hoping to work closely with.

I think the people who have been actively participating in the work should
be the ones to decide which standards body to collaborate with.


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-31 Thread Jim Blandy
Certain bits of the original post are getting more emphasis than I had
anticipated. Let me try to clarify why we in Devtools want this change or
something like it.

The primary goals here are not related to automation and testing. They are:

   - to allow Devtools to migrate the console and the JS debugger to the
   CDP;
   - to start a tools server that can be shared between Gecko and Servo;
   - to replace Gecko's devtools server, implemented in JS, with one
   implemented in Rust, to reduce memory consumption, introduce less noise
   into performance and memory measurements, and help us share code with
   Servo.

Our user interfaces already work with the CDP.


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-31 Thread Jim Blandy
Google has indicated a willingness to participate in standardizing the
protocol.

If we switch from a devtools protocol used only by us to a tooling protocol
used by the rest of the industry, that is strictly an improvement over the
status quo, even if our implementation deviates from the others' to some
degree: developers whose tooling doesn't run into the points of
incompatibility can now include Firefox in their scripting and tests, and
the maintainers of tooling that use the CDP can work around
incompatibilities.

I think it's a mistake to analogize this too closely to content-exposed
APIs. Those are under extreme backwards compatibility pressure, of a sort
that I think doesn't hold here. Unlike a web user, who simply sees a broken
page and can't do anything about it, our audience is more technical, and
can do things like upgrade tools when things don't work.


On Thu, Aug 31, 2017 at 1:09 PM, James Graham <ja...@hoppipolla.co.uk>
wrote:

> On 31/08/17 19:42, Jim Blandy wrote:
>
>> Some possibly missing context: Mozilla Devtools wants to see this
>> implemented for our own use. After much discussion last summer in London,
>> the Firefox Devtools team decided to adopt the Chrome Debugging Protocol
>> for the console and the JavaScript debugger. (The cases for converting the
>> other tools like the Inspector are less compelling.)
>>
>> Speaking as the designer of Firefox's protocol, the CDP is a de-facto
>> standard. The Firefox protocol really has not seen much uptake outside
>> Mozilla, whereas the Chrome Debugging Protocol is implemented with varying
>> degrees of fidelity by several different browsers. "Proprietary" is not the
>> right term here, but in the sense of "used nowhere else", one could argue
>> that it is Mozilla that is using the proprietary protocol, not Chrome. In a
>> real sense, it is more consistent with Mozilla's mission for us to join the
>> rest of the community, implement the CDP for the tools where it makes
>> sense, and participate in its standardization, than to continue to push a
>> protocol nobody else uses.
>>
>
> I entirely agree that the current Firefox protocol is also proprietary.
> However I also assumed that it's considered an internal implementation
> detail rather than something we would expect people to interoperate with.
> If that wasn't the case then I apologise: I should have complained earlier
> :)
>
> Going forward, if we implement a "de-facto" standard that is not actually
> standardised, we are assuming a large risk, in addition to the problems
> around our stated values. An obvious concern is that Google are free to
> change the protocol as they like, including in ways that are intentionally
> or accidentally incompatible with other implementations. We also know from
> past experience of implementing "de-facto" standards that implementation
> differences end up hardcoded into third party consumers (i.e. web pages in
> the case of DOM APIs), making it impossible to get interoperability without
> causing intolerable short-term breakage. This has prevented standardisation
> and compatibility of "de-facto" standards like innerText and
> contentEditable, which remain nominally equivalent but actually very
> different in all browsers.
>
> If people are starting to standardise not just the protocol but also the
> semantics of CDP, that's great. But people tend to vastly underestimate how
> long standardisation will take, and overestimate the resources that they
> will find to work on it. So it would be good to see concrete progress
> before we are actually shipping.
>


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-31 Thread Jim Blandy
On Thu, Aug 31, 2017 at 2:50 AM, James Graham 
wrote:

> In general it seems unfortunate if we are deciding to implement a
> proprietary protocol rather than opting to either extend something that is
> already a standard (e.g. WebDriver) or perform standardisation work
> alongside the implementation.


That would be unfortunate, I agree. I hope that's not what's happening
here. By adopting the CDP console and JS debugger protocols I think we are
doing exactly what Mozilla should do: encouraging the adoption and
formalization of de-facto standards.

What alternatives have we considered here?


The Devtools team had extensive discussions a year ago in London about
whether to continue to develop Firefox's own protocols, or whether to go
with what Chrome, Safari, and now Edge are using. We considered the
question on a tool-by-tool basis. Harald Kirschner has started to assemble
a list of the third-party tools that use the CDP.


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-31 Thread Jim Blandy
Some possibly missing context: Mozilla Devtools wants to see this
implemented for our own use. After much discussion last summer in London,
the Firefox Devtools team decided to adopt the Chrome Debugging Protocol
for the console and the JavaScript debugger. (The cases for converting the
other tools like the Inspector are less compelling.)

Speaking as the designer of Firefox's protocol, the CDP is a de-facto
standard. The Firefox protocol really has not seen much uptake outside
Mozilla, whereas the Chrome Debugging Protocol is implemented with varying
degrees of fidelity by several different browsers. "Proprietary" is not the
right term here, but in the sense of "used nowhere else", one could argue
that it is Mozilla that is using the proprietary protocol, not Chrome. In a
real sense, it is more consistent with Mozilla's mission for us to join the
rest of the community, implement the CDP for the tools where it makes
sense, and participate in its standardization, than to continue to push a
protocol nobody else uses.

The devtools.html JavaScript debugger already implements the Chrome
protocol. We've implemented adapters like Valence that implement the
Firefox protocol in terms of the Chrome protocol. So while it's true that
not everything is documented at the standards of a WHATWG specification
(yet), in practical terms, there hasn't been much problem getting things
going.


On Thu, Aug 31, 2017 at 2:50 AM, James Graham 
wrote:

> On 31/08/17 02:14, Michael Smith wrote:
>
>> On 8/30/2017 15:56, David Burns wrote:
>>  > Do we know if the other vendors would see value in having this spec'ed
>> properly so that we have true interop here? Reverse engineering seems like
>> a "fun" project but what stops people from breaking stuff without realising?
>>
>> Fortunately we're not reverse engineering here (for the most part), all
>> protocol messages are specified in a machine-readable JSON format which
>> includes inline documentation [0] --- this is what the cdp Rust library
>> consumes. The spec is versioned and the authors do seem to follow a proper
>> process of introducing new features as "experimental", stabilizing mature
>> ones, and deprecating things before they're removed.
>>
>
> I think that the reverse engineering part is not the wire protocol, which
> is usually the most trivial part, but the associated semantics. It doesn't
> seem that useful to support the protocol unless we behave in the same way
> as Chrome in response to the messages. It's the specification of that
> behaviour which is — as far as I can tell — missing, and which seems likely
> to involve reverse engineering.
>
> In general it seems unfortunate if we are deciding to implement a
> proprietary protocol rather than opting to either extend something that is
> already a standard (e.g. WebDriver) or perform standardisation work
> alongside the implementation. What alternatives have we considered here? Is
> it possible to extend existing standards with missing features? Or are the
> current tools using the protocol so valuable that we don't have any choice
> but to support them on their terms? If it's the latter, or we just think
> the Chrome protocol is so technically superior to the other options that we
> would be foolish to ignore it, can we work with Google to get it
> standardised? I think some meaningful attempt at standardisation should be
> a prerequisite to this kind of protocol implementation shipping in Firefox.
>


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-31 Thread Jim Blandy
Definitely, yes. As Michael said, some core subset of the protocol is
already a de-facto standard. The process of actual specification, with
committees and processes and imprimaturs and all that, is getting started,
with interest from several major players, including Microsoft and Google.

On Wed, Aug 30, 2017 at 3:56 PM, David Burns <dbu...@mozilla.com> wrote:

> Do we know if the other vendors would see value in having this spec'ed
> properly so that we have true interop here? Reverse engineering  seems like
> a "fun" project but what stops people from breaking stuff without
> realising?
>
> David
>
> On 30 August 2017 at 22:55, Michael Smith <li...@spinda.net> wrote:
>
> > Hi everyone,
> >
> > Mozilla DevTools is exploring implementing parts of the Chrome DevTools
> > Protocol ("CDP") [0] in Firefox. This is an HTTP, WebSockets, and JSON
> > based protocol for automating and inspecting running browser pages.
> >
> > Originally built for the Chrome DevTools, it has seen wider adoption with
> > outside developers. In addition to Chrome/Chromium, the CDP is supported
> by
> > WebKit, Safari, Node.js, and soon Edge, and an ecosystem of libraries and
> > tools already exists which plug into it, for debugging, extracting
> > performance data, providing live-preview functionality like the Brackets
> > editor, and so on. We believe it would be beneficial if these could be
> > leveraged with Firefox as well.
> >
> > The initial implementation we have in mind is an alternate target for
> > third-party integrations to connect to, in addition to the existing
> Firefox
> > DevTools Server. The Servo project has also expressed interest in adding
> > CDP support to improve its own devtools story, and a PR is in flight to
> > land a CDP server implementation there [1].
> >
> > I've been working on this project with guidance from Jim Blandy. We've
> > come up with the following approach:
> >
> > - A complete, typed Rust implementation of the CDP protocol messages and
> > (de)serialization lives in the "cdp" crate [2], automatically generated
> > from the protocol's JSON specification [3] using a build script (this
> > happens transparently as part of the normal Cargo compilation process).
> > This comes with Rustdoc API documentation of all messages/types in the
> > protocol [4] including textual descriptions bundled with the
> specification
> > JSON. The cdp crate will likely track the Chrome stable release for which
> > version of the protocol is supported. A maintainers' script exists which
> > can find and fetch the appropriate JSON [5].
> >
> > - The "tokio-cdp" crate [6] builds on the types and (de)serialization
> > implementation in the cdp crate to provide a server implementation built
> on
> > the Tokio asynchronous I/O system. The server side provides traits for
> > consuming incoming CDP RPC commands, executing them concurrently and
> > sending back responses, and simultaneously pushing events to the client.
> > They are generic over the underlying transport, so the same backend
> > implementation could provide support for "remote" clients plugging in
> over
> > HTTP/WebSockets/JSON or, for example, a browser-local client
> communicating
> > over IPDL.
> >
> > - In Servo, a new component plugs into the cdp and tokio-cdp crates and
> > acts on behalf of connected CDP clients in response to their commands,
> > communicating with the rest of the Servo constellation. This server is
> > disabled by default and can be started by passing a "--cdp" flag to the
> > Servo binary, binding a TCP listener to the loopback interface at the
> > standard CDP port 9222 (a different port can be specified as an option to
> > the flag).
> >
> > - The implementation we envision in Firefox/Gecko would act similarly: a
> > new Rust component, disabled by default and switched on via a command
> line
> > flag, which binds to a local port and mediates between Gecko internals
> and
> > clients connected via tokio-cdp.
> >
> > We chose to build this on Rust and the Tokio event loop, along with the
> > hyper HTTP library and rust-websocket which plug into Tokio.
> >
> > Rust and Cargo provide excellent facilities for compile-time code
> > generation which integrate transparently into the normal build process,
> > avoiding the need to invoke scripts by hand to keep generated artifacts
> in
> > sync. The Rust ecosystem provides libraries such as quote [7] and serde
> [8]
> > which allow us to auto-generate an efficient, typed, an

Re: JavaScript Binary AST Engineering Newsletter #1

2017-08-23 Thread Jim Blandy
Can code transmitted as an AST be source-mapped, so that developer tools
can work properly?

On Fri, Aug 18, 2017 at 4:39 AM, David Teller  wrote:

> Hey, all cool kids have exciting Engineering Newsletters these days, so
> it's high time the JavaScript Binary AST got one!
>
>
> # General idea
>
> JavaScript Binary AST is a joint project between Mozilla and Facebook to
> rethink how JavaScript source code is stored/transmitted/parsed. We
> expect that this project will help visibly speed up the loading of large
> codebases of JS applications, including web applications, and will have
> a large impact on the JS development community, including both web
> developers, Node developers, add-on developers and ourselves.
>
>
> # Context
>
> The size of JavaScript-based applications – starting with webpages –
> keeps increasing, while the parsing speed of JavaScript VMs has basically
> peaked. The result is that the startup of many web/js applications is
> now limited by JavaScript parsing speed. While there are measures that
> JS developers can take to improve the loading speed of their code, many
> applications have reached a situation in which such an effort becomes
> unmanageable.
>
> The JavaScript Binary AST is a novel format for storing JavaScript code.
> The global objective is to decrease the time spent between
> first-byte-received and code-execution-starts. To achieve this, we focus
> on a new file format, which we hope will aid our goal by:
>
> - making parsing easier/faster
> - supporting parallel parsing
> - supporting lazy parsing
> - supporting on-the-fly bytecode generation
> - decreasing file size.
>
> For more context on the JavaScript Binary AST and alternative
> approaches, see the companion high-level blog post [1].
>
>
> # Progress
>
> ## Benchmarking Prototype
>
> The first phase of the project was spent developing an early prototype
> format and parser to validate our hypothesis that:
>
> - we can make parsing much faster
> - we can make lazy parsing much faster
> - we can reduce the size of files.
>
> The prototype built for benchmarking was, by definition, incomplete, but
> sufficient to represent ES5-level source code. All our benchmarking was
> performed on snapshots of Facebook’s chat and of the JS source code of
> our own DevTools.
>
> While numbers are bound to change as we progress from a proof-of-concept
> prototype towards a robust and future-proof implementation, the results
> we obtained from the prototype are very encouraging.
>
> - parsing time *0.3 (i.e. parsing time is less than a third of the
> original time)
> - lazy parsing time *0.1
> - gzipped file size vs gzipped human-readable source code *0.3
> - gzipped file size vs gzipped minified source code *0.95.
>
> Please keep in mind that future versions may have very different
> results. However, these values confirm that the approach can
> considerably improve performance.
>
> More details in bug 1349917.
>
>
> ## Reference Prototype
>
> The second phase of the project consisted of building a second prototype
> format designed to:
>
> - support future evolutions of JavaScript
> - support annotations designed to allow safe lazy/concurrent parsing
> - serve as a reference implementation for third-party developers
>
> This reference prototype has been implemented (minus compression) and is
> currently being tested. It is entirely independent from SpiderMonkey and
> uses Rust (for all the heavy data structure manipulation) and Node (to
> benefit from existing parsing/pretty-printing tool Babylon). It is
> likely that, as data structures stabilize, the reference prototype will
> migrate to a full JS implementation, so as to make it easier for
> third-party contributors to join in.
>
> More details on the tracker [2].
>
>
> ## Standard tracks
>
> Like any proposed addition to the JavaScript language, the JavaScript
> Binary AST needs to go through a standards body.
>
> The project has successfully been accepted as an ECMA TC39 Stage 1
> Proposal. Once we have a working Firefox implementation and compelling
> results, we will proceed towards Stage 2.
>
> More details on the tracker [3].
>
>
>
> # Next few steps
>
> There is still lots of work before we can land this on the web.
>
>
> ## SpiderMonkey implementation
>
> We are currently working on a SpiderMonkey implementation of the
> Reference Prototype, initially without lazy or concurrent parsing. We
> need to finish it to be able to properly test that JavaScript decoding
> works.
>
> ## Compression
>
> The benchmarking prototype only implemented naive compression, while the
> reference prototype – which carries more data – doesn’t implement any
> form of compression yet. We need to reintroduce compression to be able
> to measure the impact on file size.
>
>
>
> # How can I help?
>
> If you wish to help with the project, please get in touch with either
> Naveed Ihsanullah (IRC: naveed, mail: nihsanullah) or myself (IRC:
> Yoric, mail: dteller).
>
>
> [1] 

Re: Actually-Infallible Fallible Allocations

2017-08-03 Thread Jim Blandy
On Wed, Aug 2, 2017 at 7:58 AM, Ehsan Akhgari 
wrote:

> Vector reallocations show up in profiles all the time, literally in more
> than half of the profiles I look at every day.  If you examine for example
> the large dependency graph of
> https://bugzilla.mozilla.org/show_bug.cgi?id=Speedometer_V2, a large
> number of optimizations that are
> landing every day are merely removing various types of extra needless
> buffer reallocations we have in profiles.


Okay, that's a surprise to me, and definitely something I will try to keep
in mind.

But my question wasn't, "is pre-reservation ever valuable?" but rather "is
it valuable in this particular code?" My assumption is that most code isn't
performance-sensitive, and so simplicity should be the priority.

The code Alexis cited at the top of the thread is here:
http://searchfox.org/mozilla-central/rev/f0e4ae5f8c40ba742214e89aba3f554da0b89a33/netwerk/base/Dashboard.cpp#435

Is Dashboard performance-sensitive?


Re: Actually-Infallible Fallible Allocations

2017-08-01 Thread Jim Blandy
I have to ask: does anyone recall the benchmarks that showed that bounds
checks or vector reallocations were a measurable performance hit in this
code?

If we don't, we should just write simple, obviously correct code.

If we do, there should be a comment to that effect, or something to convey
to readers the performance constraints this code is trying to satisfy.
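
To spell out what I mean by simple and obviously correct, here is a sketch
of the shape, with standard-library types and a made-up SocketElement
standing in for the Gecko ones, and std::vector's throwing reserve standing
in for a fallible SetCapacity: decide whether the allocation succeeded
exactly once, then do appends that cannot fail.

  #include <cstdio>
  #include <new>
  #include <vector>

  struct SocketElement { int fd; };  // hypothetical stand-in

  // Reserve once, handle failure once, then append without per-element
  // checks. Because capacity was reserved up front, the push_back calls
  // below cannot reallocate, so they cannot fail for lack of memory.
  bool CollectSockets(const std::vector<int>& fds,
                      std::vector<SocketElement>& out) {
    try {
      out.reserve(fds.size());            // the one allocation that can fail
    } catch (const std::bad_alloc&) {
      return false;                       // report OOM to the caller, once
    }
    for (int fd : fds) {
      out.push_back(SocketElement{fd});
    }
    return true;
  }

  int main() {
    std::vector<int> fds = {3, 4, 5};
    std::vector<SocketElement> sockets;
    if (!CollectSockets(fds, sockets)) {
      fprintf(stderr, "out of memory\n");
      return 1;
    }
    printf("collected %zu sockets\n", sockets.size());
    return 0;
  }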

On Tue, Aug 1, 2017 at 2:10 PM, Kris Maglione  wrote:

> On Tue, Aug 01, 2017 at 01:28:37PM -0700, Eric Rahm wrote:
>
>> Both AppendElements and SetLength default initialize the values, there's
>> no
>> intermediate uninitialized state. SetCapacity doesn't initialize the
>> values,
>> but they're also not indexable -- we have release bounds assertions --
>> unless you try really hard.
>>
>
> For a lot of cases, default initialized isn't much better than completely
> uninitialized. And it has an additional, unnecessary performance penalty
> for the pattern that you suggested.
>
> SetCapacity brings us back to the same problem where we either have
> unnecessary allocation checks for every element we append, or we skip
> sanity checks entirely and hope things stay sane if we refactor.
>
> With the infallible*() methods of the Vector class, though, we don't have
> to worry about any of that, and the code you suggested below becomes
> something like:
>
>  Vector<dom::SocketElement> sockets;
>  if (!sockets.reserve(inputs.length()))
>    return false;
>
>  for (auto input : inputs)
>    sockets.infallibleEmplaceBack(input);
>
> and we still get automatic allocation sanity checks and bounds accounting,
> but without any release overhead from default initialization, allocation
> checks, or unnecessary reallocations.
>
> And since that's the natural pattern for appending a fixed number of
> elements to a Vector, it doesn't really require any thought when writing
> it. The safe and efficient approach is basically the default option.
>
>
> nsTArray doesn't support emplace although it does have AppendElement(T&&),
>> but that wouldn't really help in this case. It's possible we could add
>> that
>> of course!
>>
>> -e
>>
>> On Tue, Aug 1, 2017 at 1:11 PM, Kris Maglione 
>> wrote:
>>
>> On Tue, Aug 01, 2017 at 12:57:31PM -0700, Eric Rahm wrote:
>>>
>>> nsTArray has various Append methods, in this case just using the
 infallible
 AppendElements w/o out a SetCapacity call would do the job. Another
 option
 would be to use SetLength which would default initialize the new
 elements.

 If we're trying to make things fast-but-safeish in this case, the
 preferred
 way is probably:

 auto itr = AppendElements(length, fallible);
 if (!itr) ...

 // no bounds checking
 for (...; i++, itr++)
   auto& mSocket = *itr;

 // bounds checking
 for (...)
   auto& mSocket = *sockets[i];

 In general I agree the pattern of fallibly allocating and then fallibly
 appending w/o checking the return value is a bit silly. Perhaps we
 should
 just mark the fallible version MUST_USE? It looks like it's commented
 out
 for some reason (probably a ton of bustage).


>>> Honestly, I think the Vector infallible* methods are a much cleaner way
>>> to
>>> handle this, especially when we need something like
>>> infallibleEmplaceBack.
>>> They tend to encourage writing more efficient code, with fewer
>>> reallocations and allocation checks. And I think the resulting code tends
>>> to be safer than AppendElements followed by unchecked raw pointer access,
>>> placement `new`s, and an intermediate state where we have a bunch of
>>> indexable but uninitialized elements in the array.
>>>
>>> That's been my experience when reading and writing code using the various
>>> approaches, anyway.
>>>
>>>
>>> On Tue, Aug 1, 2017 at 12:18 PM, Kris Maglione 
>>>
 wrote:

 On Tue, Aug 01, 2017 at 12:31:24PM -0400, Alexis Beingessner wrote:

>
> I was recently searching through our codebase to look at all the ways
> we
>
>> use fallible allocations, and was startled when I came across several
>> lines
>> that looked like this:
>>
>> dom::SocketElement &mSocket = *sockets.AppendElement(fallible);
>> ...
>> So in isolation this code is saying "I want to handle allocation
>> failure"
>> and then immediately not doing that and just dereferencing the result.
>> This
>> turns allocation failure into Undefined Behaviour, rather than a
>> process
>> abort.
>>
>> Thankfully, all the cases where I found this were preceded by
>> something
>> like the following:
>>
>> uint32_t length = socketData->mData.Length();
>> if (!sockets.SetCapacity(length, fallible)) {
>>   JS_ReportOutOfMemory(cx);
>>   return NS_ERROR_OUT_OF_MEMORY;
>> }
>> for (uint32_t i = 0; i < socketData->mData.Length(); i++) {
>>   dom::SocketElement &mSocket = *sockets.AppendElement(fallible);

Re: [PATCH] mfbt: Poison: drop obsolete OS2 support

2017-07-31 Thread Jim Blandy
Please read our documentation on submitting patches to Firefox:

https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/How_to_Submit_a_Patch


On Mon, Jul 31, 2017 at 12:28 AM, Enrico Weigelt, metux IT consult <
enrico.weig...@gr13.net> wrote:

> Signed-off-by: Enrico Weigelt, metux IT consult 
> ---
>  mfbt/Poison.cpp | 34 +-
>  1 file changed, 1 insertion(+), 33 deletions(-)
>
> diff --git a/mfbt/Poison.cpp b/mfbt/Poison.cpp
> index b2767011d..e9981764f 100644
> --- a/mfbt/Poison.cpp
> +++ b/mfbt/Poison.cpp
> @@ -14,7 +14,7 @@
>  #include "mozilla/Assertions.h"
>  #ifdef _WIN32
>  # include 
> -#elif !defined(__OS2__)
> +#else
>  # include 
>  # include 
>  # ifndef MAP_ANON
> @@ -76,38 +76,6 @@ GetDesiredRegionSize()
>
>  #define RESERVE_FAILED 0
>
> -#elif defined(__OS2__)
> -static void*
> -ReserveRegion(uintptr_t aRegion, uintptr_t aSize)
> -{
> -  // OS/2 doesn't support allocation at an arbitrary address,
> -  // so return an address that is known to be invalid.
> -  return (void*)0xFFFD;
> -}
> -
> -static void
> -ReleaseRegion(void* aRegion, uintptr_t aSize)
> -{
> -  return;
> -}
> -
> -static bool
> -ProbeRegion(uintptr_t aRegion, uintptr_t aSize)
> -{
> -  // There's no reliable way to probe an address in the system
> -  // arena other than by touching it and seeing if a trap occurs.
> -  return false;
> -}
> -
> -static uintptr_t
> -GetDesiredRegionSize()
> -{
> -  // Page size is fixed at 4k.
> -  return 0x1000;
> -}
> -
> -#define RESERVE_FAILED 0
> -
>  #else // Unix
>
>  #include "mozilla/TaggedAnonymousMemory.h"
> --
> 2.11.0.rc0.7.gbe5a750
>


Re: More Rust code

2017-07-17 Thread Jim Blandy
BTW, speaking of training: Jason's and my book, "Programming Rust," will be
available on paper from O'Reilly on August 29th! Steve Klabnik's book with
No Starch Press is coming out soon as well, but I don't know the details
there.

On Mon, Jul 17, 2017 at 12:43 PM, Ted Mielczarek  wrote:

> Nick,
>
> Thanks for kicking off this discussion! I felt like a broken record
> talking to people about this in SF. From my perspective Rust is our
> single-biggest competitive advantage for shipping Firefox, and every
> time we choose C++ over Rust we throw that away. We know the costs of
> shipping complicated C++ code: countless hours of engineering time spent
> chasing down hard-to-reproduce crashes, exploitable security holes, and
> threading issues. Organizationally we need to get to a point where every
> engineer has the tools and training they need to make Rust their first
> choice for writing code that ships with Firefox.
>
> On Mon, Jul 10, 2017, at 09:15 PM, Bobby Holley wrote:
> > I think this is pretty uncontroversial. The high-level strategic decision
> > to bet on Rust has already been made, and the cost of depending on the
> > language is already sunk. Now that we're past that point, I haven't heard
> > anyone arguing why we shouldn't opt for memory safety when writing new
> > standalone code. If there are people out there who disagree and think
> > they
> > have the arguments/clout/allies to make the case, please speak up.
>
> From my anecdotal experiences, I've heard two similar refrains:
> 1) "I haven't learned Rust well enough to feel confident choosing it for
> this code."
> 2) "I don't want to spend time learning Rust that I could be spending
> just writing the code in C++."
>
> I believe that every developer that writes C++ at Mozilla should be
> given access to enough Rust training and work hours to spend learning it
> beyond the training so that we can eliminate case #1. With the Rust
> training sessions at prior All-Hands and self-motivated learning, I
> think we've pretty well saturated the group of early adopters. These
> people are actively writing new Rust code. We need to at least get the
> people that want to learn Rust but don't feel like they've had time to
> that same place.
>
> For case #2, there will always be people that don't want to learn new
> languages, and I'm sympathetic to their perspective. Learning Rust well
> does take a large investment of time. I don't know that I would go down
> the road of making Rust training mandatory (yet), but we are quickly
> going to hit a point where "I don't feel like learning Rust" is not
> going to cut it anymore. I would hope that by that point we will have
> trained everyone well enough that case #2 no longer exists, but if not
> we will have to make harder choices.
>
>
> > The tradeoffs come when the code is less standalone, and we need to weigh
> > the integration costs. This gets into questions like whether/how Rust
> > code
> > should integrate into the cycle collector or into JS object reflection,
> > which is very much a technical decision that should be made by experts. I
> > have a decent sense of who some of those experts might be, and would like
> > to find the most lightweight mechanism for them to steer this ship.
>
> We definitely need to figure out an ergonomic solution for writing core
> DOM components in Rust, but I agree that this needs a fair bit of work
> to be feasible. Most of the situations I've seen recently were not that
> tightly integrated into Gecko.
>
> -Ted


Re: Phabricator Update, July 2017

2017-07-13 Thread Jim Blandy
Yeah, this is kind of the opposite of "No New Rationale".

https://air.mozilla.org/friday-plenary-rust-and-the-community/


On Thu, Jul 13, 2017 at 2:49 PM, David Anderson  wrote:

> On Thursday, July 13, 2017 at 1:38:18 PM UTC-7, Joe Hildebrand wrote:
> > I'm responding at the top of the thread here so that I'm not singling
> out any particular response.
> >
> > We didn't make clear in this process how much work Mark and his team did
> ahead of the decision to gather feedback from senior engineers on both
> Selena and my teams, and how deeply committed the directors have been in
> their support of this change.
> >
> > Seeing a need for more modern patch reviewing at Mozilla, Laura
> Thomson's team did an independent analysis of the most popular tools
> available on the market today.  Working with the NSS team to pilot their
> selected tool, they found that Phabricator is a good fit for our
> development approach (coincidentally a good enough fit that the Graphics
> team was also piloting Phabricator in parallel).  After getting the support
> of the Engineering directors, they gathered feedback from senior engineers
> and managers on the suggested approach, and tweaked dates and features to
> match up with our release cycles more appropriately.
>
> The problem is this hasn't been transparent. It was announced as an edict,
> and I don't remember seeing any public discussion beforehand. If senior
> engineers were consulted, it wasn't many of them - and the only person I
> know who was, was consulted after the decision was made.
>
> I've contributed thousands of patches over many years and haven't really
> seen an explanation of how this change will make my development process
> easier. Maybe it will, or maybe it won't. It probably won't be a big deal
> because this kind of tooling is not really what makes development hard. I
> don't spend most of my day figuring out how to get a changeset from one
> place to another.
>
> The fact is that no one really asked us beforehand, "What would make
> development easier?" We're just being told that Phabricator will. That's
> why people are skeptical.


Re: Phabricator Update, July 2017

2017-07-13 Thread Jim Blandy
Many people seem to be asking, essentially: What will happen to old bugs?
I'm trying to follow the discussion, and I'm not clear on this myself.

For example, "Splinter will be turned off." For commenting and reviewing,
okay, understood. What about viewing patches on old bugs?

Yes, Phabricator will segregate review comments and general discussion. But
keeping all the comments in one thread is a mixed blessing, too; for
example, check out bug 1271650 and try to tell me what the status of each
patch is. "Well, those patches should be split across several bugs." Okay,
but that fragments the conversation too.

There's an inherent tension here, and Ye Olde Bugzilla Patch Review Process
is more of a lovingly polished turd than a good solution.


On Thu, Jul 13, 2017 at 8:39 PM, Boris Zbarsky  wrote:

> On 7/13/17 9:04 PM, Mark Côté wrote:
>
>> It is also what newer systems
>> do today (e.g. GitHub and the full Phabricator suite)
>>
>
> I should note that with GitHub what this means is that you get discussion
> on the PR that should have gone in the issue, with the result that people
> following the issue don't see half the relevant discussion. In particular,
> it's common to go off from "reviewing this line of code" to "raising
> questions about what the desired behavior is", which is squarely back in
> issue-land, not code-review land.
>
> Unfortunately, I don't know how to solve that problem without designating
> a "central point where all discussion happens" and ensuring that anything
> outside it is mirrored there...
>
> -Boris
>


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread Jim Blandy
It seems like there is actually not a consensus on this. (I had thought
Smaug's view was the consensus, and found bz's post surprising.)

Both approaches have tradeoffs. There's a good reason we require the bug
number front and center in a commit message. But I dare you to read the Web
Replay bug  for
context; it's currently on comment 366. (Granted, that bug should have been
subdivided, but there are other examples.)

If this is worth settling, we should definitely *not* settle it on this
list. A mailing list is fine for making one's case or pointing out issues,
but the final decision should be made by a small group of engineering
leadership, and then documented on wiki.m.o. (If it isn't there already! I
looked but couldn't find anything definitive...)


Re: FYI: We've forked the Breakpad client code

2017-02-09 Thread Jim Blandy
Under the circumstances, I'll volunteer to review, if that's feasible.

On Thu, Feb 9, 2017 at 12:37 PM, Ted Mielczarek  wrote:

> On Thu, Feb 9, 2017, at 02:47 PM, Aaron Klotz wrote:
> > This is great news, Ted!
> >
> > Are you going to be creating a module for this? Who are the peers?
>
> I don't think a new module is necessary, we've covered the existing
> integration code (nsExceptionHandler.cpp etc) under the Toolkit module
> for a long time and I think it's been OK. If it becomes a problem we can
> certainly reevaluate. There aren't a lot of people that are comfortable
> reviewing this code, but that's not exactly a unique situation in Gecko.
>
> -Ted


Re: PSA: sysconf(_SC_NPROCESSORS_ONLN) is apparently not reliable on Android / ARM

2017-01-23 Thread Jim Blandy
There are plans to move the js/src/threading primitives into MFBT, so I
don't think the core count is out of scope.
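
For reference, here is a small probe (my own sketch, not code from the bug;
_SC_NPROCESSORS_CONF/_ONLN are the usual POSIX names and are not available
everywhere) that shows the discrepancy the subject line describes:

  #include <cstdio>
  #include <thread>
  #if !defined(_WIN32)
  #include <unistd.h>
  #endif

  int main() {
    // Portable hint from the standard library; may be 0 if unknown.
    printf("hardware_concurrency: %u\n", std::thread::hardware_concurrency());
  #if !defined(_WIN32)
    // Configured vs. currently-online processors. As this thread notes, the
    // ONLN figure can under-report on Android/ARM, where idle cores are taken
    // offline, so treat it as a lower bound rather than the core count.
    printf("_SC_NPROCESSORS_CONF: %ld\n", sysconf(_SC_NPROCESSORS_CONF));
    printf("_SC_NPROCESSORS_ONLN: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
  #endif
    return 0;
  }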


On Mon, Jan 23, 2017 at 11:53 AM, Gabriele Svelto 
wrote:

> On 23/01/2017 18:44, Gian-Carlo Pascutto wrote:
> > If only we had some crossplatform runtime that abstracted such system
> > specifics away from us
> > (https://bugzilla.mozilla.org/show_bug.cgi?id=663970) then maybe we
> > wouldn't have to re-fix the same bugs every 5 years.
>
> On this topic, I've heard multiple times from multiple sources to steer
> away from NSPR for cross-platform stuff and use MFBT instead as much as
> possible. But MFBT doesn't even come close to supporting all the
> system-specific stuff that NSPR supports. Do we plan on extending MFBT
> to cover what's missing?
>
>  Gabriele
>
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rust required to build Gecko

2016-12-20 Thread Jim Blandy
All the more reason for "mach" to be the exclusive place to put all these
smarts.

The only things I really want anyway are:

mk_add_options MOZ_OBJDIR=obj-bug
ac_add_options --enable-debug='-g3 -O0 -fno-inline'
ac_add_options --disable-optimize


On Tue, Dec 20, 2016 at 10:41 PM, J. Ryan Stinnett <jry...@gmail.com> wrote:

> On Wed, Dec 21, 2016 at 12:23 AM, Jim Blandy <jbla...@mozilla.com> wrote:
> > I had a .mozconfig file that included the line:
> >
> > . "$topsrcdir/build/mozconfig.common"
>
> My understanding is that we're generally not supposed to include the
> in-tree mozconfigs in our local builds, since they are free to make
> various automation-only assumptions, which seems to be what happened
> here. It's tempting to do it anyway though, for some abstract feeling
> that the build will more closely resemble an official one. (I used to
> do something like this as well!)
>
> I believe many people have fallen into this trap over the years, most
> likely because of MDN pages[1] that appeared to suggest using
> automation mozconfigs directly (there's at least a warning on the page
> now).
>
> [1]: https://developer.mozilla.org/en-US/docs/Mozilla/Developer_
> guide/Build_Instructions/Configuring_Build_Options#
> Example_.mozconfig_Files
>
> - Ryan
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rust required to build Gecko

2016-12-20 Thread Jim Blandy
I had a .mozconfig file that included the line:

. "$topsrcdir/build/mozconfig.common"

This turns out to cause problems: it loads build/mozconfig.rust, which says
to look for rustc in $topsrcdir/rustc/bin, leading to errors like this:

 0:05.53 checking for rustc... not found
 0:05.53 DEBUG: rustc: Trying /home/jimb/moz/dbg/rustc/bin/rustc
 0:05.53 ERROR: Cannot find rustc
 0:05.54 *** Fix above errors and then restart with\
 0:05.54"/usr/bin/gmake -f client.mk build"
 0:05.54 client.mk:375: recipe for target 'configure' failed
 0:05.54 gmake: *** [configure] Error 1

I removed the line loading mozconfig.common and things seem to be going well.




On Fri, Dec 16, 2016 at 3:06 PM, Simon Sapin  wrote:

> On 16/12/16 19:26, Ralph Giles wrote:
>
>> Anyway, thanks for the suggestion, Simon. I filed
>> https://bugzil.la/1324040 with this fix.
>>
>> In the meantime, the curl command line from https://rustup.rs/ should
>> work equivalently.
>>
>
> Oops, I wrote this earlier and forgot to click Send.
>
> --
>
> http://docs.python-requests.org/en/master/community/faq/#wha
> t-are-hostname-doesn-t-match-errors says that Python 2.7.9+ supports SNI
> natively.
>
> It looks like Requests can make it work on older Python if some other
> libraries (including PyOpenSSL) are available:
>
> https://stackoverflow.com/questions/18578439/using-requests-
> with-tls-doesnt-give-sni-support/18579484#18579484
>
> --
> Simon Sapin
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Windows CPU power settings

2016-12-20 Thread Jim Blandy
On Fedora, that'd be the kernel-tools package, and the "cpupower
frequency-info" command, I think?


On Sat, Dec 17, 2016 at 11:04 PM, Bobby Holley 
wrote:

> It looks like there are similar (though not as bad) shenanigans on Linux.
>
> In a fresh Ubuntu install, there are two available frequency governors,
> "powersave" and "performance". The default is "powersave", which seems
> suboptimal on a Desktop Xeon. The intel_pstate driver doesn't support
> manually pegging the clock, but the "performance" governor seems generous
> enough that it probably doesn't matter.
>
> Installing cpufrequtils and then setting the governor in
> /etc/init.d/cpufrequtils to "performance" seemed to do the trick. You can
> get a live read on clock speeds with cpufreq-aperf, which should show all
> logical CPUs pegged to/near their max during a clobber build.
>
> Changing this seemed to take a clobber build from 8:45 to 8:30, though I
> didn't remeasure in powersave.
>
> HTH,
> bholley
>
> On Tue, Dec 13, 2016 at 6:31 AM, Ben Kelly  wrote:
>
> > On Mon, Dec 12, 2016 at 8:44 PM, Gregory Szorc  wrote:
> >
> > > The Windows 10 power settings appear to set the minimum CPU frequency
> at
> > 5%
> > > or 10% of maximum. When I cranked this up to 100%, artifact build time
> > > dropped from ~170s to ~77s and full build configure dropped from ~165s
> to
> > > ~97s!
> > >
> > > If you are a Windows user with Xeons in your desktop, you may want to
> > visit
> > > Control Panel -> Hardware and Sound -> Power Options -> Edit Plan
> > Settings
> > > -> Change advanced power settings -> Process power management ->
> Minimum
> > > processor state and crank that up and see what happens. Note: running
> > your
> > > CPU at 100% all the time may impact your power bill!
> > >
> >
> > FWIW, in my windows 10 Control Panel -> Hardware and Sound -> Power
> Options
> > I had 3 preset power profiles:
> >
> > * Balanced (default selected)
> > * Power Saver
> > * Performance
> >
> > The "Balanced" profile has the 5% minimum clock speed.  The "Performance"
> > profile set that to 100%.
> >
> > Ben
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2016-12-20 Thread Jim Blandy
Can't people use Let's Encrypt to obtain a certificate for free without the
usual CA run-around?

https://letsencrypt.org/getting-started/

"Let’s Encrypt is a free, automated, and open certificate authority brought
to you by the non-profit Internet Security Research Group (ISRG)."


On Tue, Dec 20, 2016 at 6:38 AM,  wrote:

> This is a good idea but a terrible implementation.  I already need someone
> else's approval (registrar) to run a website (unless I want visitors to
> remember my IP addresses).  NOW I will need ANOTHER someone to approve it
> as well (the CA authority), (unless I want visitors to click around a bunch
> of security "errors").
>
> We shouldn't be ADDING authorities required to make websites.  The web is
> open and free and this proposal adds authority to a select few who can
> dictate whats a "valid" site and what isn't.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla's x86 Linux builds now require SSE2

2016-11-28 Thread Jim Blandy
But, how can we so casually drop support for processors manufactured as
recently as 2003? Wasn't there any discussion of this decision?

Just kidding, I'm glad we can finally put the x87 behind us, and I know
there was *lots* of discussion...
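
(For the curious: the startup check described below is conceptually tiny.
Here's a sketch using the GCC/Clang <cpuid.h> intrinsics -- not necessarily
how the in-tree check is actually written, and the message text is made up:)

#include <cpuid.h>
#include <stdio.h>
#include <stdlib.h>

static void RequireSSE2() {
  unsigned eax, ebx, ecx, edx;
  // CPUID leaf 1, EDX bit 26 is the SSE2 feature flag.
  if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 26))) {
    fprintf(stderr, "This Firefox build requires a CPU with SSE2 support.\n");
    exit(1);
  }
}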


On Sun, Nov 27, 2016 at 1:25 AM, Henri Sivonen  wrote:

> https://bugzilla.mozilla.org/show_bug.cgi?id=1274196 has landed for
> Firefox 53. It makes build for 32-bit x86 Linux made on Mozilla's
> infrastructure require SSE2. It does not change the defaults for
> builds made elsewhere.
>
> This effectively means that x86+SSE2 is tier-1 but x86 potentially
> without SSE2 is in the same tier-3 bucket as other CPU architectures
> that Mozilla doesn't provide builds for but Linux distros do.
>
> Mozilla-made builds that require for 32-bit x86 Linux check for SSE2
> on startup and exit with an error message to stderr if SSE2 is not
> found. Depending on their distribution, the affected users may have
> the recourse of obtaining a build that doesn't require SSE2 from their
> Linux distribution. For the duration of the ESR 52 cycle, the users
> could, alternatively, switch to ESR, but the error message does not
> point this out.
>
> For updates, the capability to report the presence of SSE2 went into
> Firefox 51, so the updater can be smart about showing a tombstone SUMO
> link vs. updating.
>
> Notably, this means that 387 floating-point math is no longer tier-1
> now that all tier-1 x86 platforms require SSE2 and, therefore, can use
> standard-based SSE2 floating-point math instead of legacy 387
> floating-point math. x86 Mac and x86 Android never shipped on non-SSE2
> CPUs, so SSE2 has been part of their baseline since the beginning.
> Mozilla's Windows builds have required SSE2 since Firefox 49. SSE2 is
> also a mandatory part of x86_64.
>
> Since the 32-bit x86 targets that are tier-1 for Rust require SSE2,
> this enables us to use rustc with a tier-1 target configuration in
> Mozilla-made Firefox builds for 32-bit x86 Linux.
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Would "Core :: Debugging: gdb" be a good bugzilla component?

2016-09-16 Thread Jim Blandy
I think a meta bug would meet all the requirements.

- You'd have to tell people about the component anyway, just as you would a
metabug.

- Components have textual names instead of numbers, but you can give bugs
names that you can use in the blocker field.

- You can get mail for new bugs added as blockers to the metabug, which is
somewhat like watching a component.


On Fri, Sep 16, 2016 at 9:48 AM, Andrew McCreight 
wrote:

> You could also just file things like that in the corresponding component,
> and maybe create some kind of meta bug for gdb hooks. So debugging for
> nsTArray would be filed in XPCOM, etc. I think that's the way it works for
> Javascript engine related debugging hooks. Failing that, having the nested
> gdb inside Debugger seems like overkill given that no top level
> Core::Debugger component exists right now. Maybe if in the future there's a
> flood of gdb and lldb bugs it would be further subdivided.
>
> On Fri, Sep 16, 2016 at 7:50 AM, Andrew Sutherland <
> asutherl...@asutherland.org> wrote:
>
> > Right now the gecko gdb pretty-printers have been tracked using bugs in
> > "Core :: Build Config" because that's where the bug that first added them
> > lived.  That component currently has 1773 bugs in it, 3 of which involve
> > gdb, and with daily activity of ~10 bugs a day.  I think a more targeted
> > component could be useful since gdb users could watch the component
> without
> > getting the comparative fire hose of Build Config.  Right now
> notifications
> > would need to come via explicit CC or watching for changes in
> dependencies
> > of the existing bugs.
> >
> > The component would presumably cover not just the gdb pretty printers but
> > also other gdb related issues.
> >
> > The triggering factor is that I made a foolish mistake with the
> > nsTHashtable pretty-printer.  It has been telling lies, see
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1303174 (fix on inbound,
> > with apologies to anyone misled by the broken pretty-printer).
> >
> > Andrew
> >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2016-09-12 Thread Jim Blandy
So, in this pattern, if the target thread's execution of SomeRunnable::Run
could fail, causing ~SomeRunnable to call mMyRef's destructor and thus free
the referent on the target thread, that would be a bug?
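
(To make the question concrete, here's a sketch of the pattern as I understand
it; "Thing" and the runnable names are hypothetical, and the error handling is
omitted because it's exactly what's in question:)

class SomeRunnable final : public mozilla::Runnable {
public:
  explicit SomeRunnable(already_AddRefed<Thing> aThing) : mMyRef(aThing) {}

  NS_IMETHOD Run() override {
    // Runs on the target thread. Hand the reference straight back to the
    // originating thread without ever touching its (non-threadsafe) refcount
    // here...
    RefPtr<ResponseRunnable> response = new ResponseRunnable(mMyRef.forget());
    return mOriginalThread->Dispatch(response.forget());
    // ...but if Run() bailed out before that forget(), ~SomeRunnable would
    // release mMyRef on this thread -- the scenario I'm asking about.
  }

private:
  RefPtr<Thing> mMyRef;                 // only safe to release on the original thread
  nsCOMPtr<nsIThread> mOriginalThread;  // captured at construction (not shown)
};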

On Mon, Sep 12, 2016 at 12:47 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 9/12/16 3:40 PM, Jim Blandy wrote:
>
>> Could you go into more detail here? Either the caller will free it, or the
>> callee will free it --- but they're both on the same thread.
>>
>
> We have this pretty common pattern for handing references around threads
> safely:
>
> Main thread:
>
>   RefPtr<SomeRunnable> runnable = new SomeRunnable(myRef.forget());
>   someTarget->Dispatch(runnable.forget());
>
> Target thread, runnable's Run method:
>
>   RefPtr<ResponseRunnable> runnable = new ResponseRunnable(mMyRef.forget());
>   originalThread->Dispatch(runnable.forget());
>
> And then back on the original thread the thing in myRef is worked with
> again.  The idea is to keep that thing alive without ever refcounting it on
> the target thread, because for example it may not have a threadsafe
> refcount implementation.
>
> If the callee
>> puts the reference someplace where another thread will free it, then
>> that's
>> the operation that needs to be audited.
>>
>
> Right, this all assumes that the Dispatch() calls above are infallible.
> Which they are, iirc.
>
>
> -Boris
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2016-09-12 Thread Jim Blandy
On Mon, Sep 12, 2016 at 12:22 PM, Boris Zbarsky  wrote:

It brings up one potential concern with Move: since the
>
>> callee might not take the value (intentionally or unintentionally), the
>> caller must *refrain* from caring about the state of its Move()'d
>> variable, between the Move() operation and any reassignment/cleanup.
>>
>
> It's worse than that.  For a wide range of callers (anyone handing the ref
> across threads), the caller must check immediately whether the callee
> actually took the value and if not make sure things are released on the
> proper thread...
>

Could you go into more detail here? Either the caller will free it, or the
callee will free it --- but they're both on the same thread. If the callee
puts the reference someplace where another thread will free it, then that's
the operation that needs to be audited.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: DXR: How to search for a file that has since been removed from the tree?

2016-08-26 Thread Jim Blandy
On Fri, Aug 26, 2016 at 1:41 PM, Gregory Szorc  wrote:

> $ time hg log dom/canvas/WebGLContextReporter.cpp > /dev/null
> 0.110s real
>
> $ time git log -- dom/canvas/WebGLContextReporter.cpp > /dev/null
> 3.175s
>

Yes --- For similar reasons, CVS handled this case very well too!  [angelic
smile]
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New [must_use] property in XPIDL

2016-08-24 Thread Jim Blandy
On Wed, Aug 24, 2016 at 12:17 PM, R Kent James  wrote:

> But for the record, there are LOTS of issues in Mozilla code that are
> missing a "part of writing production code" My own pet peeve is JS code
> that gives no hint about the types of inputs, when there are complex
> assumptions about the types. Or C++ files that give you no hint about why
> they exist, and their fundamental purpose. How would you like it if I
> pushed a patch that shutdown all Firefox code development until all of
> these documentation issues were cleaned up? That might not be the highest
> priority today, right, even though it is a "fundamental part of writing
> production code" (as well as my pet peeve and a high priority to me)?
>

Fair enough. If someone simply broke all of Mozilla Central and said,
"Well, it's good engineering to do it this way", that wouldn't fly: within
Mozilla Central, the burden is on the annotator to keep the code working.
And indeed, the way the change was done, marking up interfaces is a
follow-up process that can proceed incrementally. Markup is rate-limited by
the rule that patches mustn't break the build.

So I guess what you're saying is that, out-of-tree, it's the
reverse: the burden rests on people who have no interest in the interface
being annotated, and who possibly aren't even familiar with the affected
code in their application. There's no rate limit on the impact on
out-of-tree code.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New [must_use] property in XPIDL

2016-08-24 Thread Jim Blandy
On Mon, Aug 22, 2016 at 4:39 PM, R Kent James  wrote:

> Exactly, and I hope that you and others restrain your exuberance a
> little bit for this reason. A warning would be one thing, but a hard
> failure that forces developers to drop what they are doing and think
> hard about an appropriate check is just having you set YOUR priorities
> for people rather than letting people do what might be much more
> important work.
>

it seems to me that propagating errors correctly is a just fundamental part
of writing production code. It's more like a coding standards issue, not
something that's practical to treat as a question of personal preference
and priorities.
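
(Concretely, the kind of check I mean -- Foo and DoThing are made-up names,
but this is the pattern [must_use] exists to enforce at compile time:)

#include "nsError.h"

nsresult CallerThatPropagates(Foo* aFoo) {
  nsresult rv = aFoo->DoThing();   // with [must_use], ignoring rv is a build error
  if (NS_FAILED(rv)) {
    return rv;                     // propagate rather than silently continue
  }
  // ... the rest only runs if DoThing() actually succeeded ...
  return NS_OK;
}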
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: snake_case C++ in m-c (was: Re: C++ Core Guidelines)

2016-08-15 Thread Jim Blandy
On Mon, Aug 15, 2016 at 11:59 AM, Bobby Holley 
wrote:

> On Mon, Aug 15, 2016 at 11:53 AM, Henri Sivonen 
> wrote:
>
>> What I'm asking is:
>>
>> When I take encoding_rs_cpp.h and adapt it to XPCOM/MFBT types for use
>> in Gecko, should this be
>> Encoding::for_label(const nsACString& label) // change only types that
>> need changing
>> or
>> Encoding::ForLabel(const nsACString& aLabel) // change naming style, too
>> ?
>>
>
>
> The latter, IMO.
>

I would agree.

The disadvantage is that people familiar with the Rust API would now have
to study the C++ API separately, because they're no longer trivially
related.

But you've already adapted the argument types to Gecko, and presumably made
other sorts of changes to provide an idiomatic C++ API. That's a good
choice, in my opinion, but it means you've already given up the prospect of
taking familiarity with the Rust API and applying it immediately to use the
C++ API.
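
(A sketch of the kind of split I mean -- all names are hypothetical rather
than the real encoding_rs C API: the extern "C" layer keeps the Rust
snake_case name verbatim, and only the hand-written Gecko wrapper adopts Gecko
naming and Gecko types.)

#include <stddef.h>
#include <stdint.h>
#include "nsString.h"

class Encoding;

extern "C" {
// Declared exactly as the Rust FFI exports it -- no mangling of the name.
const Encoding* encoding_for_label(const uint8_t* label, size_t label_len);
}

class Encoding final {
public:
  // Gecko-flavoured wrapper: Gecko naming, Gecko string type.
  static const Encoding* ForLabel(const nsACString& aLabel) {
    return encoding_for_label(
        reinterpret_cast<const uint8_t*>(aLabel.BeginReading()),
        aLabel.Length());
  }
};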
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: snake_case C++ in m-c (was: Re: C++ Core Guidelines)

2016-08-15 Thread Jim Blandy
tl;dr: There's a reason it's called "mangling"...

On Mon, Aug 15, 2016 at 8:45 AM, Jim Blandy <jbla...@mozilla.com> wrote:

> I suggest that we start allowing snake_case C++ in m-c so that C++
>> wrappers for the C interfaces to Rust code can be named with mere
>> copy-paste of the Rust method names and so that we don't need to
>> change naming style of GSL stuff regardless of whether what's in the
>> tree is a Mozilla polyfill for GSL, a third-party polyfill (for legacy
>> compilers) of GSL or GSL itself.
>>
>
> Has anyone suggested mangling C++ bindings for Rust definitions into
> Mozilla C++ style, making them different from the Rust names? I'm opposed!
> :)
>
> In the most general case, cross-language adapters must mangle the names
> they deal with. For example, XPIDL has to affix something to IDL attribute
> names when naming the C++ accessors, and "Get" and "Set" prefixes aren't
> worse than anything else. And maybe the languages just have different
> identifier syntaxes, so mangling is necessary simply to produce well-formed
> code. But when an unmangled conversion is possible, that seems clearly
> preferable. We're using Cheddar to produce C headers for our Rust mp4parse
> crate; as far as I can see, Cheddar doesn't mangle Rust names.
>
> The Mozilla C++ style applies only to identifiers defined in Mozilla's C++
> code base, not things that we merely use that are defined elsewhere. When
> we use upstream code, we use its definitions in the form they're offered. I
> think Rust code should be treated similarly to "upstream" code in that
> sense, and the C++ should use the Rust names unchanged.
>
> It's true that the benefit from a convention increases the more
> consistently it's applied, but as long as we want to use upstream code
> bases, we must work in a multi-style world, so universal conformance is
> just never going to happen. In that light, the C++ standard and GSL are
> just two more cases of an upstream project not matching Mozilla style.
>
> You don't quite spell it out, but I feel like you're hoping to argue
> that the C and C++ standard libraries' use of snake_case suggests that it's
> somehow acceptable, or ought to be acceptable, for Mozilla C++ definitions
> too. But these are mostly arbitrary decisions, and our well-established
> arbitrary decision is that that's not acceptable.
>
>
> On Mon, Aug 15, 2016 at 6:56 AM, Henri Sivonen <hsivo...@hsivonen.fi>
> wrote:
>
>> We've already established that Rust code in m-c will use the Rust
>> coding style, so we'll have snake_case methods and functions in Rust
>> code in m-c. Also, Rust code that exposes an FFI interface typically
>> does so with snake_case functions, which look natural both for Rust
>> and for C as C style is influenced by the C standard library.
>>
>> When a Rust library provides a C++ interface, the C++ interface is
>> built on top of the C/FFI interface. Per above, the Rust and C layers
>> use snake_case for methods/functions.
>>
>> As it happens, the C++ standard library also uses snake_case, so for a
>> C++ interface to a Rust library outside of the Gecko context, it's not
>> unnatural to use snake_case methods on the C++ layer, too. Like this:
>> https://github.com/hsivonen/encoding_rs/blob/master/include/
>> encoding_rs_cpp.h
>>
>> Since Gecko has does not use C++ standard-library strings and, at
>> least currently, does not use GSL, a slightly different C++ wrapper is
>> called for in the Gecko case.
>>
>> But should such a wrapper just use XPCOM nsACString and MFBT Range or
>> should it also change the names of the methods to follow Gecko case
>> rules?
>>
>> Relatedly, on the topic of MFBT Range and GSL, under the subject "C++
>> Core Guidelines" Jim Blandy <jbla...@mozilla.com> wrote:
>> > Given GSL's pedigree, I was assuming that we'd bring it in-tree and
>> switch
>> > out MFBT facilities with the corresponding GSL things as they became
>> > available.
>> >
>> > One of the main roles of MFBT is to provide polyfills for features
>> > standardized in C++ that we can't use yet for toolchain reasons
>> (remember
>> > MOZ_OVERRIDE?); MFBT features get removed as we replace them with the
>> > corresponding std thing.
>>
>> I'd have expected a polyfill that expects to be swapped out to use the
>> naming of whatever it's a polyfill for, except maybe for the namespace.
>> Since MFBT has mozilla::UniquePtr instead of mozilla::unique_ptr, I
>> had understood mozilla::UniquePtr as a long-term Gecko-specific
>> implementation of the unique pointer concept as opposed to something
>> that's expected to be replaced with std::unique_ptr as soon as feasible.

Re: snake_case C++ in m-c (was: Re: C++ Core Guidelines)

2016-08-15 Thread Jim Blandy
>
> I suggest that we start allowing snake_case C++ in m-c so that C++
> wrappers for the C interfaces to Rust code can be named with mere
> copy-paste of the Rust method names and so that we don't need to
> change naming style of GSL stuff regardless of whether what's in the
> tree is a Mozilla polyfill for GSL, a third-party polyfill (for legacy
> compilers) of GSL or GSL itself.
>

Has anyone suggested mangling C++ bindings for Rust definitions into
Mozilla C++ style, making them different from the Rust names? I'm opposed!
:)

In the most general case, cross-language adapters must mangle the names
they deal with. For example, XPIDL has to affix something to IDL attribute
names when naming the C++ accessors, and "Get" and "Set" prefixes aren't
worse than anything else. And maybe the languages just have different
identifier syntaxes, so mangling is necessary simply to produce well-formed
code. But when an unmangled conversion is possible, that seems clearly
preferable. We're using Cheddar to produce C headers for our Rust mp4parse
crate; as far as I can see, Cheddar doesn't mangle Rust names.

The Mozilla C++ style applies only to identifiers defined in Mozilla's C++
code base, not things that we merely use that are defined elsewhere. When
we use upstream code, we use its definitions in the form they're offered. I
think Rust code should be treated similarly to "upstream" code in that
sense, and the C++ should use the Rust names unchanged.

It's true that the benefit from a convention increases the more
consistently it's applied, but as long as we want to use upstream code
bases, we must work in a multi-style world, so universal conformance is
just never going to happen. In that light, the C++ standard and GSL are
just two more cases of an upstream project not matching Mozilla style.

You don't quite spell it out, but I feel like you're hoping to argue
that the C and C++ standard libraries' use of snake_case suggests that it's
somehow acceptable, or ought to be acceptable, for Mozilla C++ definitions
too. But these are mostly arbitrary decisions, and our well-established
arbitrary decision is that that's not acceptable.

On Mon, Aug 15, 2016 at 6:56 AM, Henri Sivonen <hsivo...@hsivonen.fi> wrote:

> We've already established that Rust code in m-c will use the Rust
> coding style, so we'll have snake_case methods and functions in Rust
> code in m-c. Also, Rust code that exposes an FFI interface typically
> does so with snake_case functions, which look natural both for Rust
> and for C as C style is influenced by the C standard library.
>
> When a Rust library provides a C++ interface, the C++ interface is
> built on top of the C/FFI interface. Per above, the Rust and C layers
> use snake_case for methods/functions.
>
> As it happens, the C++ standard library also uses snake_case, so for a
> C++ interface to a Rust library outside of the Gecko context, it's not
> unnatural to use snake_case methods on the C++ layer, too. Like this:
> https://github.com/hsivonen/encoding_rs/blob/master/include/
> encoding_rs_cpp.h
>
> Since Gecko does not use C++ standard-library strings and, at
> least currently, does not use GSL, a slightly different C++ wrapper is
> called for in the Gecko case.
>
> But should such a wrapper just use XPCOM nsACString and MFBT Range or
> should it also change the names of the methods to follow Gecko case
> rules?
>
> Relatedly, on the topic of MFBT Range and GSL, under the subject "C++
> Core Guidelines" Jim Blandy <jbla...@mozilla.com> wrote:
> > Given GSL's pedigree, I was assuming that we'd bring it in-tree and
> switch
> > out MFBT facilities with the corresponding GSL things as they became
> > available.
> >
> > One of the main roles of MFBT is to provide polyfills for features
> > standardized in C++ that we can't use yet for toolchain reasons (remember
> > MOZ_OVERRIDE?); MFBT features get removed as we replace them with the
> > corresponding std thing.
>
> I'd have expected a polyfill that expects to be swapped out to use the
> naming of whatever it's a polyfill for, except maybe for the namespace.
> Since MFBT has mozilla::UniquePtr instead of mozilla::unique_ptr, I
> had understood mozilla::UniquePtr as a long-term Gecko-specific
> implementation of the unique pointer concept as opposed to something
> that's expected to be replaced with std::unique_ptr as soon as
> feasible.
>
> Are we getting value out of going against the naming convention of the
> C++ standard library in order to enforce a Mozilla-specific naming
> style?
>
> I suggest that we start allowing snake_case C++ in m-c so that C++
> wrappers for the C interfaces to Rust code can be named with mere
> copy-paste of the Rust method names and so that we don't need to
> change naming style of GSL stuff regardless of whether what's in the
> tree is a Mozilla polyfill for GSL, a third-party polyfill (for legacy
> compilers) of GSL or GSL itself.

Re: Report your development frustrations via `mach rage`

2016-08-08 Thread Jim Blandy
I sincerely apologize to the list for my careless introduction of a
bikeshed debate. I will strive to do better in the future.

On Mon, Aug 8, 2016 at 2:04 PM, Armen Zambrano G. <arme...@mozilla.com>
wrote:

> I agree with the last suggestion. Words and context matter.
>
> On 2016-08-08 04:25 PM, Jim Blandy wrote:
>
>>  LOL, but honestly, one of the ways I get myself to treat people better is
>> to avoid the whole rage / tableflip / flame vocabulary when thinking about
>> what I want to do. Could we publicize this as "mach gripe", and leave
>> "rage" as an alias?
>>
>> On Mon, Aug 8, 2016 at 10:51 AM, Gregory Szorc <g...@mozilla.com> wrote:
>>
>> Sometimes when hacking on Firefox/Gecko you experience something that irks
>>> you. But filing a bug isn't appropriate or could be time consuming. You
>>> instead vent frustrations on IRC, with others around the figurative water
>>> cooler, or - even worse - you don't tell anyone.
>>>
>>> The Developer Productivity Team would like to know when you have a bad
>>> experience hacking on Firefox/Gecko so we can target what to improve to
>>> make your work experience better and more productive.
>>>
>>> If you update to the latest commit on mozilla-central, you'll find a new
>>> mach command: `mach rage`.
>>>
>>> `mach rage` opens a small web form where you can quickly express
>>> frustration about anything related to developing Firefox/Gecko. It asks a
>>> few simple questions:
>>>
>>> * Where did you encounter a problem
>>> * How severe was it
>>> * What was it
>>> * (optional) How can we fix it
>>>
>>> If you don't want to use `mach rage`, just load
>>> https://docs.google.com/forms/d/e/1FAIpQLSeDVC3IXJu5d33Hp_
>>> ZTCOw06xEUiYH1pBjAqJ1g_y63sO2vvA/viewform
>>>
>>> I encourage developers to vent as many frustrations as possible. Think of
>>> each form submission as a vote. The more times we see a particular pain
>>> point mentioned, the higher the chances something will be done about it.
>>>
>>> This form is brand new, so if you have suggestions for improving it, we'd
>>> love to hear them. Send feedback to g...@mozilla.com and/or
>>> jgrif...@mozilla.com.
>>>
>>> Happy raging.
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>>
>
> --
> Zambrano Gasparnian, Armen
> Engineering productivity
> http://armenzg.blogspot.ca
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Report your development frustrations via `mach rage`

2016-08-08 Thread Jim Blandy
 LOL, but honestly, one of the ways I get myself to treat people better is
to avoid the whole rage / tableflip / flame vocabulary when thinking about
what I want to do. Could we publicize this as "mach gripe", and leave
"rage" as an alias?

On Mon, Aug 8, 2016 at 10:51 AM, Gregory Szorc  wrote:

> Sometimes when hacking on Firefox/Gecko you experience something that irks
> you. But filing a bug isn't appropriate or could be time consuming. You
> instead vent frustrations on IRC, with others around the figurative water
> cooler, or - even worse - you don't tell anyone.
>
> The Developer Productivity Team would like to know when you have a bad
> experience hacking on Firefox/Gecko so we can target what to improve to
> make your work experience better and more productive.
>
> If you update to the latest commit on mozilla-central, you'll find a new
> mach command: `mach rage`.
>
> `mach rage` opens a small web form where you can quickly express
> frustration about anything related to developing Firefox/Gecko. It asks a
> few simple questions:
>
> * Where did you encounter a problem
> * How severe was it
> * What was it
> * (optional) How can we fix it
>
> If you don't want to use `mach rage`, just load
> https://docs.google.com/forms/d/e/1FAIpQLSeDVC3IXJu5d33Hp_
> ZTCOw06xEUiYH1pBjAqJ1g_y63sO2vvA/viewform
>
> I encourage developers to vent as many frustrations as possible. Think of
> each form submission as a vote. The more times we see a particular pain
> point mentioned, the higher the chances something will be done about it.
>
> This form is brand new, so if you have suggestions for improving it, we'd
> love to hear them. Send feedback to g...@mozilla.com and/or
> jgrif...@mozilla.com.
>
> Happy raging.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko gdb pretty printers now understand nsTHashtable/friends

2016-07-18 Thread Jim Blandy
It warms the cockles of my heart to see people adding to the GDB
pretty-printers. :)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ Core Guidelines

2016-07-15 Thread Jim Blandy
Given GSL's pedigree, I was assuming that we'd bring it in-tree and switch
out MFBT facilities with the corresponding GSL things as they became
available.

One of the main roles of MFBT is to provide polyfills for features
standardized in C++ that we can't use yet for toolchain reasons (remember
MOZ_OVERRIDE?); MFBT features get removed as we replace them with the
corresponding std thing. Why would Range vs. GSL span be any different?
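
(For anyone skimming the quote below, the representational difference at
issue, shown with purely illustrative structs -- not the real mozilla::Range
or gsl::span; both forms describe the same elements, they just store them
differently:)

#include <stddef.h>

template <typename T>
struct SpanStyle {     // pointer + length, as in GSL span / gfx ArrayView
  T* ptr;
  size_t length;
};

template <typename T>
struct RangeStyle {    // pointer pair, as in MFBT Range
  T* start;
  T* end;              // one past the last element
  size_t length() const { return end - start; }
};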


On Fri, Jul 15, 2016 at 3:44 AM, Henri Sivonen  wrote:

> On Thu, Mar 24, 2016 at 6:01 PM, Jeff Muizelaar 
> wrote:
> > On Wed, Jan 6, 2016 at 7:15 AM, Henri Sivonen 
> wrote:
> >> On Thu, Oct 1, 2015 at 9:58 PM, Jonathan Watt  wrote:
> >>> For those who are interested in this, there's a bug to consider
> integrating
> >>> the Guidelines Support Library (GSL) into the tree:
> >>>
> >>> https://bugzilla.mozilla.org/show_bug.cgi?id=1208262
> >>
> >> This bug appears to have stalled.
> >>
> >> What should my expectations be regarding getting an equivalent of (at
> >> least single-dimensional) GSL span (formerly array_view;
> >> conceptually Rust's slice) into MFBT?
> >
> > Something like this already exits: mfbt/Range.h
>
> And we also have
> https://dxr.mozilla.org/mozilla-central/source/gfx/src/ArrayView.h
> whose comments say to prefer Range.
>
> ArrayView as well as GSL span use pointer and length while Range uses
> pointer and pointer past end.
>
> Are we happy enough with Range to the point where Range should be
> promoted in the codebase where the Core Guidelines would recommend
> span?
>
> (What to call it is, of course, a total bikeshed, but when the Core
> Guidelines are happening near the C++ standardization source of
> authority, it seems rather NIH-y to call it something other than
> "span" even if there are still compiler compat reasons [are there?]
> not to use Microsoft's span.h outright.)
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Static analysis for "use-after-move"?

2016-07-07 Thread Jim Blandy
Yes, that's part of the standard expectations for std::move, and thus
mozilla::Move as well.


On Thu, Jul 7, 2016 at 6:53 AM, Gerald Squelart <squel...@gmail.com> wrote:

> On Monday, May 2, 2016 at 9:49:24 AM UTC+10, Jim Blandy wrote:
> > No, we don't know that [moved-from strings become empty].
> > The contract of a move in C++ is simply that the
> > source object is safe to destruct, but otherwise in an undefined state*.
> You
> > must not make any assumptions about its value.
> * (To be fair, in that context I think you were only arguing there that we
> can't assume anything about the value, but didn't explicitly rule out a
> reassignment.)
>
> Sorry to revive this old thread, but I've just noticed that our MFBT
> Swap() is written in terms of Move()'s:
> > template
> > inline void
> > Swap(T& aX, T& aY)
> > {
> >   T tmp(Move(aX));
> >   aX = Move(aY);
> >   aY = Move(tmp);
> > }
> This assumes that a moved-from object can safely be assigned to -- Which
> used to seem reasonable to me, but I got confused after our discussions
> here!
>
> Clang's std::swap does the same! (With extra checks that the type is move
> constructible)
>
> So can we assume that any moved-from object should be left in a state that
> is safe to destruct AND safe to be assigned to?
>
> But no assumption can be made about the actual contained value, i.e.: It
> could be the same, it could be empty, or it could be in some unsafe-to-use
> state until destroyed or overwritten.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: Changing the toString result on DOM prototype objects

2016-06-06 Thread Jim Blandy
Well, the Debugger.Object.prototype.class getter shouldn't change, but
perhaps devtools shouldn't use it any more. Devtools should be displaying
objects in a way that doesn't surprise developers.

It seems to me the ideal behavior would be for Devtools to show objects in
the way that the ES6 standard toString methods would display them, and then
if those have been modified, attempt to display them as the modifications
suggest, to the extent that that can be done safely.

On Mon, Jun 6, 2016 at 9:23 AM, Nick Fitzgerald 
wrote:

> Yes (via the `Debugger.Object.prototype.class` getter) but unless I've
> misunderstood the scope of this proposal, the class name exposed by that
> getter should not change, only the `Object.prototype.toString.call(thing)`
> would change.
>
> On Mon, Jun 6, 2016 at 12:18 AM, Panos Astithas  wrote:
>
> > On Fri, Jun 3, 2016 at 8:21 PM, Nick Fitzgerald  >
> > wrote:
> >
> >> On Fri, Jun 3, 2016 at 8:41 AM, Boris Zbarsky  wrote:
> >>
> >> > Devtools bug: none so far, but maybe we need one?  Does devtools rely
> on
> >> > the JSClass name or Object.prototype.toString anywhere?
> >> >
> >>
> >> ​I think we are fine. There are certainly places where we use the
> >> `Object.prototype.toString.call(thing) === "[object Whatever]"`​ hack,
> but
> >> I don't see any instances that would be tripped up by these changes.
> >>
> >>
> >>
> https://dxr.mozilla.org/mozilla-central/search?q=path%3Adevtools+%22toString.call(%22=false
> >>
> >
> > Don't we still use the JSClass name in the variables view to indicate the
> > object type (reflected from Debugger.Object.prototype.class)?
> >
> > Panos
> >
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: All about crashes

2016-05-25 Thread Jim Blandy
On Wed, May 25, 2016 at 8:59 AM, Eric Rescorla  wrote:

> It's not a matter of defects versus non-defects. It's a matter of abnormal
> program
> termination versus non-termination.
>

Great clarification - thanks for explaining!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Requiring SSE2 on all 32-bit x86 OSs (was: Re: Reverting to VS2013 on central and aurora)

2016-05-18 Thread Jim Blandy
On Wed, May 18, 2016 at 9:51 AM, Ralph Giles  wrote:

> On Wed, May 18, 2016 at 3:54 AM, Mike Hommey  wrote:
>
> > Now, with my Debian hat on, I can tell you with 100% certainty that
> > angry Debian users *will* come with patches and will return even
> > angrier if patches are not even welcome.
>

Since this paragraph is getting quoted a lot: I *believe* Mike's point was
only that this could be counted on to occur --- not that it should
influence the decision much. At Mozilla's scale, obviously, tiny minorities
of our users can still be thousands strong.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Requiring SSE2 on all 32-bit x86 OSs (was: Re: Reverting to VS2013 on central and aurora)

2016-05-18 Thread Jim Blandy
I think we need to admit that there isn't any rational, analytical way to
compare most of the costs here. The one number we *do* have, the number of
users who can't upgrade, is kind of tantalizing us, but we can't quantify
how many users we'll gain by requiring SSE2, how many other bugs we'll fix
because engineers don't need to worry about old CPUs, and so on.

When you can't be rational, often the next best thing is to be fair. For
example, our choice to support ESRs for a year is a similar arbitrary
choice: we don't re-evaluate who will be affected each time an ESR is about
to expire. We just announce a policy in advance that is useful to a large
number of people, and then follow it.

One analogous approach here would be to simply decide not to support CPUs
sold new more than N years ago.



On Wed, May 18, 2016 at 8:50 AM, Tobias B. Besemer <
tobias.bese...@googlemail.com> wrote:

> Am Mittwoch, 18. Mai 2016 16:52:25 UTC+2 schrieb Boris Zbarsky:
> > On 5/18/16 7:38 AM, Tobias B. Besemer wrote:
> > > Is this really a discussion about whether Firefox should support CPUs older than
> 13-15 years ???
> >
> > More or less, yes.
> >
> > > I can't imagine any scenario where a user needs to run a Pentium III
> with GUI and a browser on it...
> >
> > There were AMD CPUs newer than that without SSE2.
> >
> > But more importantly, we have concrete evidence, via crash-stats, that
> > such users exist, in small amounts.  So the theoretical "I can't imagine
> > why anyone would do it" argument runs into the experimental "these
> > people clearly exist" issue.
> >
> > -Boris
>
> I wrote 13-15 years, because Intel did it 15 years ago and AMD 13 years
> ago.
>
> Crash-stats with FF >40?
>
> About a year ago there was a request at Avira to support non-SSE2 with
> their scanner again...
> AFAIR I replied that the user should have a look in the BIOS for an
> SSE2 option that can be turned on, because I can imagine that this
> was optional for a long time and e.g. got turned off after a BIOS reset...
> I don't think any answer ever came back in the Feedback-Community...
>
> Is it possible in the stats to see, if the systems _should_ support it?
> (E.g. what kind of CPU is used by the system...)
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reverting to VS2013 on central and aurora

2016-05-13 Thread Jim Blandy
Since we didn't require SSE2 in 32-bit builds until this point, were
floating-point results rounded unpredictably in those builds until now?
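
(Here's the sort of thing I mean, assuming 32-bit x86 and GCC/Clang: with
-mfpmath=387 the intermediate product can live in an 80-bit x87 register,
where 1e308 * 10.0 doesn't overflow, so you may get 1e308 back; with
-mfpmath=sse the intermediate is a true 64-bit double and you get inf every
time. Whether the x87 value gets spilled -- and thus rounded -- can depend on
optimization level, which is the unpredictability I'm asking about.)

#include <stdio.h>

int main() {
  volatile double x = 1e308, s = 10.0;
  double r = x * s / s;
  printf("%g\n", r);   // "1e+308" with x87 extended precision, "inf" with SSE2
  return 0;
}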


On Fri, May 13, 2016 at 1:42 PM, Benjamin Smedberg 
wrote:

> I am talking about requiring SSE2. That is a larger (but still quite small)
> population, but the upside of being able to turn on SSE2 optimizations by
> default is an important benefit. I've discussed and confirmed this with
> Firefox product management.
>
> So yes, the plan of record is to require SSE2 starting in Firefox 49, and I
> will update the tracking bugs to reflect that.
>
> --BDS
>
> On Fri, May 6, 2016 at 1:17 PM, Gregory Szorc  wrote:
>
> > On Fri, May 6, 2016 at 9:39 AM, Benjamin Smedberg  >
> > wrote:
> >
> >> I agree that we should drop support for non-SSE2. It mattered 7 years
> ago
> >> (see https://bugzilla.mozilla.org/show_bug.cgi?id=500277) but it really
> >> doesn't matter now.
> >>
> >
> > Wait - are we talking about requiring SSE or SSE2? The thread up to this
> > point was talking about requiring just SSE, not SSE2. I just want to make
> > sure we're on the same page since according to mhoye's post the non-SSE2
> > population is ~25x larger than the non-SSE population...
> >
> >
> >>
> >> We do need to avoid updating these users to a build that will crash, and
> >> do the same "unsupported" messaging we're doing for old versions of
> MacOS.
> >> Gregory, will you own that? You will probably need to add CPU feature
> >> detection to the update URL/params for 47, or use some kind of system
> addon
> >> to shunt these users off the main update path.
> >>
> >
> > Given that 47 is in Beta, is it too late/risky to make this change on
> that
> > channel? Should we revert to VS2013 on Aurora/48 and make the updater
> > modifications on that channel? I think this will have minimal negative
> > impact, as most of the impact to changing toolchains would be on central,
> > as that is where most developers and automation live.
> >
> >
> >>
> >> On Fri, May 6, 2016 at 10:10 AM, Mike Hoye  wrote:
> >>
> >>> On 2016-05-06 12:26 AM, Gregory Szorc wrote:
> >>>
>  FWIW, the crashes we've seen so far are from incorrectly emitted movss
>  instructions. This instruction is part of the original SSE instruction
>  set,
>  which was initially unveiled by Intel on the Pentium 3 in 1999 and
>  later by
>  AMD on the Duron and Athlon XP in 2000-2001. I'm not sure why we still
>  need
>  Firefox to run on processors manufactured in the 90s.
> 
> >>> Per an IRC conversation with chutten, Firefox users on CPUs that do not
> >>> support SSE are 0.015% of our user base. (compared to 0.4% for
> no-SSE2). A
> >>> third of those are on otherwise-unsupported configurations (pre-SP3 XP,
> >>> etc), this work provides continuity of support to 0.01% of our users.
> >>>
> >>> - mhoye
> >>>
> >>>
> >>> 09:59  So, to put it clearly and precisely, of the Firefox
> >>> Population in release and beta who are reporting at least base
> telemetry
> >>> collection on machines running supported configurations, only 0.01%
> cannot
> >>> definitively say they have SSE.
> >>> 10:00  (according to a 1% random sample as stored in the
> >>> longitudinal dataset)
> >>>
> >>> ___
> >>> dev-platform mailing list
> >>> dev-platform@lists.mozilla.org
> >>> https://lists.mozilla.org/listinfo/dev-platform
> >>>
> >>
> >>
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can MAX_REFLOW_DEPTH be increased?

2016-05-02 Thread Jim Blandy
Would it be feasible to rewrite the recursive code to be iterative, and
keep state in an explicit data structure? That would make it much easier to
keep the behavior predictable and consistent across platforms.
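
(A generic sketch of the transformation -- nothing to do with the actual frame
construction or reflow code: keep the work items in an explicit container, so
"too deep" becomes a uniform check on the container rather than a bet about
each platform's stack limits and frame sizes.)

#include <stddef.h>
#include <utility>
#include <vector>

struct Node {
  std::vector<Node*> children;
};

const size_t kMaxDepth = 200000;   // arbitrary illustrative limit

bool VisitIteratively(Node* root) {
  if (!root) {
    return true;
  }
  std::vector<std::pair<Node*, size_t>> work;   // node + its depth
  work.push_back(std::make_pair(root, size_t(0)));
  while (!work.empty()) {
    Node* node = work.back().first;
    size_t depth = work.back().second;
    work.pop_back();
    if (depth > kMaxDepth) {
      return false;   // same answer on every platform, no stack overflow
    }
    // ... process |node| here ...
    for (Node* child : node->children) {
      work.push_back(std::make_pair(child, depth + 1));
    }
  }
  return true;
}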


On Mon, May 2, 2016 at 10:34 AM, L. David Baron  wrote:

> On Monday 2016-05-02 10:07 -0700, Bobby Holley wrote:
> > This might be helpful:
> >
> http://mxr.mozilla.org/mozilla-central/source/js/xpconnect/src/XPCJSRuntime.cpp#3440
> >
> > I can't vouch 100% for its accuracy, but it's probably pretty close.
> >
> > In general, dynamic stack checks (measuring the top of the stack at XPCOM
> > startup, and comparing it with the stack at the point of interest) seem
> > preferable to hard-coding number-of-recursive-calls, since it doesn't
> > depend on the size of stack frames, which may drift over time. We can't
> do
> > this for JS (see the comments surrounding the MXR link above), but I bet
> we
> > could for layout.
>
> We already have some code that could be improved to do stuff like
> this (see nsFrame::IsFrameTreeTooDeep and
> NS_FRAME_TOO_DEEP_IN_FRAME_TREE), but we'd need to add checks in
> other places (particularly frame construction, which is also
> recursive), and we'd also need to make sure that hitting these
> conditions kept things in a safe state and didn't cause security
> bugs like https://bugzilla.mozilla.org/show_bug.cgi?id=619021 .
> This probably requires thorough testing of any such code.
>
> -David
>
>
> --
>    L. David Baron http://dbaron.org/
>    Mozilla  https://www.mozilla.org/
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Static analysis for "use-after-move"?

2016-05-01 Thread Jim Blandy
On Fri, Apr 29, 2016 at 4:43 PM, Gerald Squelart  wrote:

> For example, we know how strings behave when moved from* (the original
> becomes empty), and it'd be nice to be able to use that trick when possible
> and really needed.
>

No, we don't know that. The contract of a move in C++ is simply that the
source object is safe to destruct, but otherwise in an undefined state. You
must not make any assumptions about its value.

It is not always the case that the fastest move implementation leaves the
source empty. For example, if the string is using inline storage, then a
move would need to take extra steps to clear the original.

You write about "us[ing] that trick when possible and really needed", when
what you're actually saying is "let's depend on undefined behavior." That
approach is common, and its history is not pretty.
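
(A tiny illustration of the inline-storage point -- a made-up type, not
nsString or std::string: the cheapest move is just a copy of the buffer, and
clearing the source would be pure overhead, so an implementation may
legitimately leave it untouched.)

#include <string.h>

class TinyString {
public:
  explicit TinyString(const char* s) {
    strncpy(mBuf, s, sizeof(mBuf) - 1);
  }
  TinyString(TinyString&& other) {
    memcpy(mBuf, other.mBuf, sizeof(mBuf));
    // Nothing here obliges us to write other.mBuf[0] = '\0'. The moved-from
    // object remains safe to destroy or assign to, but its value is
    // unspecified as far as callers are concerned.
  }
private:
  char mBuf[16] = {};
};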
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-05-01 Thread Jim Blandy
What are the distributions of memory and flash sizes for the devices people
currently run Fennec on? It'll be almost impossible to have a good
discussion about Fennec size without those numbers. I seem to remember that
is data we felt was okay to collect.





On Sun, May 1, 2016 at 2:21 PM, Boris Zbarsky  wrote:

> On 4/29/16 11:30 AM, sn...@snorp.net wrote:
>
>> The Fennec team has been very clear about why they oppose inclusion of
>> ICU in bug 1215247.
>>
>
> Sort of.  There's been a fair amount of moving of goalposts to get from
> https://bugzilla.mozilla.org/show_bug.cgi?id=1215247#c14 to
> https://bugzilla.mozilla.org/show_bug.cgi?id=1215247#c43 as far as I can
> tell.
>
> I sympathize with the Fennec team's position here: The amount of code in
> libxul keeps growing (not always by as little as possible, I agree!) as we
> add support for more stuff the web is coming to depend on, but some of the
> features being added are perhaps not a big deal in the markets that want a
> small APK download.  It's not clear to me who (if anyone) knows what
> features these are; clearly the JS Intl API (yes, not the only reason to
> include ICU) is one of them, but are there others we've identified?
>
> Of course https://bugzilla.mozilla.org/show_bug.cgi?id=1215247#c43 more
> or less flat-out disagrees with the suggestion that we should have fewer
> Gecko configurations, on a much broader front than ICU support...
>
> I know we have places where we use more space than we should in Gecko, and
> in particular some places where we have traded off space for speed by
> having largish static data tables instead of more dynamic checks... not to
> mention having static bindings code instead much smaller dynamic XPConnect
> code.  This tradeoff was very conscious, akin to Fennec's decision to not
> compress .so, but may have been the wrong one for Fennec in practice.
>
> If we, as an organization, really want to try to reduce the size of the
> Fennec APK, and are actually willing to put platform resources into it
> (which requires either hiring accordingly or starving other goals, in the
> usual way), then we should do that.  So far I've unfortunately seen
> precious little willingness to staff such an effort appropriately.  :(
>
> This type of attitude is why we have people in the Firefox org wanting to
>> axe Gecko.
>>
>
> For the Android case, I expect the only viable replacement that hits the
> desired size limits would be an iOS-like solution, right?  That is, a UI
> using whatever browser engine is already installed on the device?
>
> Just to be clear as to what our real alternatives are here.
>
> The engineers in Platform consistently want to dismiss mobile-specific
>> issues
>>
>
> I think you're painting with a _very_ broad brush here.
>
> -Boris
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Static analysis for "use-after-move"?

2016-04-28 Thread Jim Blandy
On Thu, Apr 28, 2016 at 8:22 PM, <jww...@mozilla.com> wrote:

> On Thursday, April 28, 2016 at 1:51:15 PM UTC+8, Jim Blandy wrote:
> > I don't really think it's a good example. TakeMediaIfKnown is accepting a
> > UniquePtr as an inout parameter: it uses, and may modify, its
> > value. It should take UniquePtr &.
> IIUC, UniquePtr can't be an inout parameter for its unique semantics
> which owns sole ownership of an object. The caller won't be able to use the
> object in a meaningful way after the function returns.
>
>
>
I'm not sure I understand. Maybe it would help if we had a more concrete
example to talk about:

$ cat unique-inout.cc
#include <stdio.h>
#include "mozilla/UniquePtr.h"

using mozilla::UniquePtr;

struct MediaFile {
  const char *name;
  MediaFile(const char *name) : name(name) { printf("constructing %s\n", name); }
  ~MediaFile() { printf("destructing %s\n", name); }
};

void foo(UniquePtr<MediaFile> &arg, bool pleaseSwap)
{
  if (pleaseSwap) {
    UniquePtr<MediaFile> ptr = Move(arg);
    arg = UniquePtr<MediaFile>(new MediaFile("foo's"));
  }
}

int main(int argc, char **argv) {
  UniquePtr<MediaFile> first(new MediaFile("first"));
  printf("before first call\n");
  foo(first, false);
  printf("after first call\n");

  UniquePtr<MediaFile> second(new MediaFile("second"));
  printf("before second call\n");
  foo(second, true);
  printf("after second call\n");
}
$ ln -sf /home/jimb/moz/dbg/mfbt mozilla
$ g++ -std=c++11 -I . unique-inout.cc -o unique-inout
$ ./unique-inout
constructing first
before first call
after first call
constructing second
before second call
constructing foo's
destructing second
after second call
destructing foo's
destructing first
$

The first MediaFile's destructor doesn't run until the end of main. The
second MediaFile's destructor runs during the second call to foo, and then
foo's MediaFile's destructor runs at the end of main.

That's what I meant by a function taking a UniquePtr as an inout parameter.
It seemed to me like the function Gerald imagined should be written as in
my code above, rather than passing Move(...) as the argument. Although the
language doesn't enforce it, I think Move should be reserved for
unconditional transfers of ownership.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Static analysis for "use-after-move"?

2016-04-27 Thread Jim Blandy
On Wed, Apr 27, 2016 at 8:02 PM, Ehsan Akhgari 
wrote:

> Hmm, OK, this is a good example.  :-)
>
> Even though Xidorn's suggestion may work in some cases, I can imagine
> that in other cases you wouldn't want the caller to have to know the
> preconditions of calling TakeMedia.
>
>
I don't really think it's a good example. TakeMediaIfKnown is accepting a
UniquePtr as an inout parameter: it uses, and may modify, its
value. It should take UniquePtr &.

UniquePtr.h disagrees with me:

 * ...  To conditionally transfer
 * ownership of a resource into a method, should the method want it, use a
 * |UniquePtr&&| argument.

It looks to me as if C++'s version of move semantics is a spiffy solution
to some gnarly problems with containers, but Rust shows that there's
something much more fundamental going on with moves, which C++'s version
doesn't capture very well.
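
(To spell out the two shapes being contrasted -- TakeMediaIfKnown and
MediaResource are just the hypothetical names from this thread:)

#include "mozilla/Move.h"
#include "mozilla/UniquePtr.h"

using mozilla::Move;
using mozilla::UniquePtr;

struct MediaResource { bool known = false; };

// Inout style: the callee may inspect, keep, or replace the pointer, and the
// caller can see what's left behind afterwards.
void TakeMediaIfKnown(UniquePtr<MediaResource>& aMedia) {
  if (aMedia && aMedia->known) {
    UniquePtr<MediaResource> mine = Move(aMedia);
    // ... stash |mine| somewhere ...
  }
}

// The style UniquePtr.h recommends for conditional transfer: take an rvalue
// reference and only Move() out of it if the method actually wants the value.
void TakeMediaIfKnownConditional(UniquePtr<MediaResource>&& aMedia) {
  if (aMedia && aMedia->known) {
    UniquePtr<MediaResource> mine = Move(aMedia);
    // ... stash |mine| somewhere ...
  }
  // If we didn't take it, the caller's UniquePtr still owns the object.
}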
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Dump GC's retaining paths for any GC thing while debugging with JS::ubi::dumpPaths

2016-04-25 Thread Jim Blandy
Could you show a sample patch that uses this?
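
(In the meantime, here's roughly the sort of tactically placed call I'm
imagining, going only by the signature described below -- the enclosing
function and its arguments are made up:)

#include "js/UbiNodeShortestPaths.h"

static void DumpWhyThisIsAlive(JSRuntime* aRuntime, JSObject* aSuspect) {
  // JS::ubi::Node constructs from a raw GC pointer (or a Rooted/Handle).
  JS::ubi::dumpPaths(aRuntime, JS::ubi::Node(aSuspect),
                     /* maxRetainingPaths = */ 5);
}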

On Mon, Apr 25, 2016 at 10:19 AM, Nick Fitzgerald 
wrote:

> Hi everyone!
>
> Friendly PSA: sometimes you're debugging a "leak" where the GC considers
> something reachable and therefore won't collect it, and this happens at an
> inopportune time for using the devtools memory panel (eg right before a
> DESTROY_RUNTIME collection), so you can't use the nice GUI for visualizing
> the GC's retaining paths.
>
> Fear not! You can use `JS::ubi::dumpPaths` to log retaining paths of any GC
> thing from within GDB, or you can compile it in as you might do with a
> tactically placed printf.
>
> The signature is `void JS::ubi::dumpPaths(JSRuntime* rt, JS::ubi::Node
> node, maxRetainingPaths = 10)`. The `rt` should be the runtime that the
> thing belongs to, `node` is the thing (JS::ubi::Node constructs from raw GC
> pointers as well as Rooted and Handle), and `maxRetainingPaths` is the
> number of retaining paths to dump.
>
> Include the "js/UbiNodeShortestPaths.h" header to get `JS::ubi::dumpPaths`.
>
> Happy bug hunting!
>
> Example output:
>
> Path 0:
> 0x7fff5fbfec10 JS::ubi::RootList
> |
> |
> ''
> |
> V
> 0x115c49d80 JSObject
> |
> |
> 'shape'
> |
> V
> 0x12ac4db50 js::Shape
> |
> |
> 'base'
> |
> V
> 0x116244e70 js::BaseShape
> |
> |
> 'ShapeTable shape'
> |
> V
> 0x116289120 js::Shape
> |
> |
> 'getter'
> |
> V
> 0x11627b3a0 JSObject
> |
> |
> 'private'
> |
> V
> 0x11855d940 JSObject
> |
> |
> 'script'
> |
> V
> 0x113290bf0 JSScript
> |
> |
> 'sourceObject'
> |
> V
> 0x1132763c0 JSObject
>
> Full output: https://pastebin.mozilla.org/8868795
> ___
> dev-developer-tools mailing list
> dev-developer-to...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-developer-tools
>
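
(For anyone wanting a concrete starting point: given the signature above, a
tactically placed call might look something like the sketch below. The helper
and the |suspect| pointer are invented for illustration; from GDB, the
equivalent is |call JS::ubi::dumpPaths(rt, JS::ubi::Node(obj), 5)|.)

#include "js/UbiNodeShortestPaths.h"

// Log up to five of the GC's retaining paths for `suspect`. The runtime and
// object here stand for whatever you have in hand at the point you are
// investigating.
static void
DumpRetainersOf(JSRuntime* rt, JSObject* suspect)
{
  JS::ubi::dumpPaths(rt, JS::ubi::Node(suspect), 5);
}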
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-25 Thread Jim Blandy
On Mon, Apr 25, 2016 at 4:03 AM, Jean-Yves Avenard 
wrote:

> I don't know how popular this method would be, nor if people would be
> shocked by providing a operator bool() but here it is :)
>
>
Usually, the people most shocked by providing an operator bool() are those
who find the type silently participating in arithmetic expressions...
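
(A minimal illustration of that hazard, with made-up names; the usual cure is
to mark the conversion |explicit|:)

struct LoadResult {
  bool mSucceeded;
  operator bool() const { return mSucceeded; }  // implicit: convenient but leaky
  // explicit operator bool() const { return mSucceeded; }  // the safer spelling
};

void Example() {
  LoadResult r{true};
  if (r) { /* fine: the intended use */ }
  int oops = r + 41;                   // also compiles: bool promotes to int
  bool weird = r < LoadResult{false};  // so does this, and it means nothing useful
  (void)oops; (void)weird;
}

With the explicit spelling, the if (r) test still works (contextual conversion
to bool), but the arithmetic and comparison lines stop compiling.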
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-20 Thread Jim Blandy
The pattern seems reasonable enough.

DXR says we have 2579 "init" or "Init" functions, and 9801 callers of such
functions. I used:

function:init ext:cpp ext:h
callers:init ext:cpp ext:h

Do you propose we make an effort to fix up existing code, or just introduce
this as the preferred pattern in new code?


On Wed, Apr 20, 2016 at 6:07 PM, Nicholas Nethercote  wrote:

> Hi,
>
> C++ constructors can't be made fallible without using exceptions. As a
> result,
> for many classes we have a constructor and a fallible Init() method which
> must
> be called immediately after construction.
>
> Except... there is one way to make constructors fallible: use an |nsresult&
> aRv| outparam to communicate possible failure. I propose that we start
> doing
> this.
>
> Here's an example showing stack allocation and heap allocation. Currently,
> we
> do this (boolean return type):
>
>   T ts;
>   if (!ts.Init()) {
> return NS_ERROR_FAILURE;
>   }
>   T* th = new T();
>   if (!th->Init()) {
> delete th;
> return NS_ERROR_FAILURE;
>   }
>
> or this (nsresult return type):
>
>   T ts;
>   nsresult rv = ts.Init();
>   if (NS_FAILED(rv)) {
> return rv;
>   }
>   T* th = new T();
>   rv = th->Init();
>   if (NS_FAILED(rv)) {
> delete th;
> return rv;
>   }
>
> (In all the examples you could use a smart pointer to avoid the explicit
> |delete|. This doesn't affect my argument in any way.)
>
> Instead, we would do this:
>
>   nsresult rv;
>   T ts(rv);
>   if (NS_FAILED(rv)) {
> return rv;
>   }
>   T* th = new T(rv);
>   if (NS_FAILED(rv)) {
> delete th;
> return rv;
>   }
>
> For constructors with additional arguments, I propose that the |nsresult&|
> argument go last.
>
> Using a bool outparam would be possible some of the time, but I suggest
> always
> using nsresult for consistency, esp. given that using bool here would be no
> more concise.
>
> SpiderMonkey is different because (a) its |operator new| is fallible and
> (b) it
> doesn't use nsresult. So for heap-allocated objects we *would* use bool,
> going
> from this:
>
>   T* th = new T();
>   if (!th) {
> return false;
>   }
>   if (!th->Init()) {
> delete th;
> return false;
>   }
>
> to this:
>
>   bool ok;
>   T* th = new T(ok);
>   if (!th || !ok) {
> delete th;
> return false;
>   }
>
> These examples don't show inheritance, but this proposal works out
> straightforwardly in that case.
>
> The advantages of this proposal are as follows.
>
> - Construction is atomic. It's not artificially split into two, and
> there's no
>   creation of half-initialized objects. This tends to make the code nicer
>   overall.
>
> - Constructors are special because they have initializer lists -- there are
>   things you can do in initializer lists that you cannot do in normal
>   functions. In particular, using an Init() function prevents you from
> using
>   references and |const| for some members. This is bad because references
> and
>   |const| are good things that can make code more reliable.
>
> - There are fewer things to forget at call sites. With our current
> approach you
>   can forget (a) to call Init(), and (b) to check the result of
> Init(). With this
>   proposal you can only forget to check |rv|.
>
> The only disadvantage I can see is that it looks a bit strange at first.
> But if
> we started using it that objection would quickly go away.
>
> I have some example patches that show what this code pattern looks like in
> practice. See bug 1265626 parts 1 and 4, and bug 1265965 part 1.
>
> Thoughts?
>
> Nick
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
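
(To round out the quoted examples, the class side of the proposed pattern
might look roughly like this; the class, its members, and the failure mode
are invented for illustration:)

#include <stdlib.h>
#include "nsError.h"

class Cache {
 public:
  // Fallible construction: failure is reported through aRv (placed last, as
  // proposed), and members, including const ones, can still be set up in the
  // initializer list.
  Cache(size_t aCapacity, nsresult& aRv)
    : mCapacity(aCapacity)
    , mEntries(static_cast<char*>(malloc(aCapacity)))
  {
    aRv = mEntries ? NS_OK : NS_ERROR_OUT_OF_MEMORY;
  }

  ~Cache() { free(mEntries); }

 private:
  const size_t mCapacity;  // a const member is awkward with a separate Init()
  char* mEntries;
};

// Call site, matching the stack-allocated example above:
//   nsresult rv;
//   Cache cache(64 * 1024, rv);
//   if (NS_FAILED(rv)) {
//     return rv;
//   }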
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why do we still need to include Qt widget in mozilla-central?

2016-04-18 Thread Jim Blandy
Where is this work taking place? Would it be possible for you to work
directly in mozilla-central?

Looking at the hg history of the widget/qt subdirectory, all the
changes I see there are Masayuki updating it for changes elsewhere, and
people making tree-wide changes that have nothing to do with Qt.

So, like any other directory would, this code is creating work for other
engineers. It needs to have someone responsible for it who actually cares
whether it works or not.

According to the module list at:

https://wiki.mozilla.org/Modules/All

The owner of widget/qt is Oleg Romashin, and the peers are Wolfgang
Rosenauer and Doug Turner. Is this information current?




On Sun, Apr 17, 2016 at 8:12 AM, Raine Mäkeläinen <
raine.makelai...@gmail.com> wrote:

> I think that in this context we are talking about mozilla/widget/qt/*
> components and yes we're using those in our Gecko build. We don't use
> QWidgets for Sailfish Browser. User interface of the Sailfish Browser is
> written with Qt QML.
>
> There is more info in the embedding wiki [1] and Dmitry's blog [2].
> Rendering pipeline has changed after Dmitry's blog post but otherwise quite
> close to the current state.
>
> [1]
> https://wiki.mozilla.org/Embedding/IPCLiteAPI
>
> [2]
> http://blog.idempotent.info/posts/whats-behind-sailfish-browser.html
>
> -Raine
>
> 2016-04-14 20:38 GMT+03:00 Henri Sivonen <hsivo...@hsivonen.fi>:
>
>> Added Raine Mäkeläinen, who has been committing to qtmozembed lately, to
>> CC.
>>
>> On Thu, Apr 14, 2016 at 1:51 AM, Jim Blandy <jbla...@mozilla.com> wrote:
>> > On Tue, Apr 12, 2016 at 4:27 AM, Henri Sivonen <hsivo...@hsivonen.fi>
>> wrote:
>> >>
>> >> On Tue, Apr 12, 2016 at 7:45 AM, Masayuki Nakano <
>> masay...@d-toybox.com>
>> >> wrote:
>> >> > So, my question is, why do we still have Qt widget in
>> mozilla-central?
>> >> > What
>> >> > the reason of keeping it in mozilla-central?
>> >>
>> >> My understanding is that
>> >> https://git.merproject.org/mer-core/qtmozembed/ still uses it. As we
>> >> are figuring out how to be more embeddable (see
>> >> https://medium.com/@david_bryant/embed-everything-9aeff6911da0 ), it's
>> >> probably a bad time to make life hard for an existing embedding
>> >> solution.
>> >
>> >
>> > This doesn't really answer the question. We can't have code in tree that
>> > isn't tested, and isn't used, and has nobody responsible for it.
>> >
>> > If someone is willing to fix it up and get it tested and included in the
>> > continuous integration process, then that's fine. But "someone might
>> want to
>> > use it in the future" can't possibly be a legit reason to keep
>> substantial
>> > bits of code in the tree.
>>
>> It looked to me like the code is being used *now*.
>>
>> Raine, does qtmozembed use the Qt widget code from mozilla-central?
>>
>> --
>> Henri Sivonen
>> hsivo...@hsivonen.fi
>> https://hsivonen.fi/
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Cancel your old Try pushes

2016-04-15 Thread Jim Blandy
On Fri, Apr 15, 2016 at 12:36 PM, Jonas Sicking  wrote:

> We could also make the default behavior be to cancel old pushes. And
> then enable push message syntax for opting in to not cancelling.
>
>
This could be very frustrating (and cause farm work to be wasted) if it
happened accidentally.

Perhaps it would be less error-prone to require an explicit choice of
overlapping or cancellation, and immediately reject pushes that haven't
chosen one or the other, for bugs that already have running try pushes.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why do we still need to include Qt widget in mozilla-central?

2016-04-13 Thread Jim Blandy
On Tue, Apr 12, 2016 at 4:27 AM, Henri Sivonen  wrote:

> On Tue, Apr 12, 2016 at 7:45 AM, Masayuki Nakano 
> wrote:
> > So, my question is, why do we still have Qt widget in mozilla-central?
> What
> > the reason of keeping it in mozilla-central?
>
> My understanding is that
> https://git.merproject.org/mer-core/qtmozembed/ still uses it. As we
> are figuring out how to be more embeddable (see
> https://medium.com/@david_bryant/embed-everything-9aeff6911da0 ), it's
> probably a bad time to make life hard for an existing embedding
> solution.
>

This doesn't really answer the question. We can't have code in tree that
isn't tested, and isn't used, and has nobody responsible for it.

If someone is willing to fix it up and get it tested and included in the
continuous integration process, then that's fine. But "someone might want
to use it in the future" can't possibly be a legit reason to keep
substantial bits of code in the tree.

Mercurial will keep all those sources around in perpetuity, so nothing is
ever really deleted; but we don't need to have it included in tip.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fwd: [Bug 1224726] High memory consumption when opening and searching a large Javascript file in debugger.

2016-04-11 Thread Jim Blandy
Has anyone contacted the commenter and found out what he's trying to do?

I've received bugmail for a lot of comments like the one below today; I
assume others have too. They're not very helpful, especially given their
length. It seems like he's intending to be helpful, but got some bad
advice. The "bugday" reference makes me think he might have been
participating in a Mozilla event. It would be sad if he was attempting to
contribute to the project and ended up wasting both his own and our time.

-- Forwarded message --
From: Bugzilla@Mozilla 
Date: Sun, Apr 10, 2016 at 9:15 AM
Subject: [Bug 1224726] High memory consumption when opening and searching a
large Javascript file in debugger.
To: j...@mozilla.com


*Comment #21 on Bug 1224726 from Mayur Patil at 2016-04-10 09:15:44 PDT*

[bugday-20160323]

Status: RESOLVED,FIXED -> UNVERIFIED

Comments:
STR:
File is not present on Dropbox.

Component:
Name Firefox
Version 46.0b9
Build ID 20160322075646
Update Channel beta
User Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101
Firefox/46.0
OS  Windows 7 SP1 x86_64

Expected Results:
Memory based test depends on index.html

Actual Results:
No zip file has found on dropbox.


--
Product/Component: Firefox :: Developer Tools: Debugger
--
*Tracking Flags:*

   - status-firefox45:affected
   - status-firefox46:fixed

--
*You are receiving this mail because:*

   - You are watching the component for the bug.

X-Bugzilla-Reason: None
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: Component-Watcher
X-Bugzilla-Classification: Client Software
X-Bugzilla-ID: 1224726
X-Bugzilla-Product: Firefox
X-Bugzilla-Component: Developer Tools: Debugger
X-Bugzilla-Version: 45 Branch
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: ram.nath241...@gmail.com
X-Bugzilla-Status: RESOLVED
X-Bugzilla-Resolution: FIXED
X-Bugzilla-Priority: P1
X-Bugzilla-Assigned-To: bgrinst...@mozilla.com
X-Bugzilla-Target-Milestone: Firefox 46
X-Bugzilla-OS: Windows 8.1
X-Bugzilla-Changed-Fields: Comment Created
X-Bugzilla-Changed-Field-Names: comment
X-Bugzilla-URL: https://bugzilla.mozilla.org/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: emacs M-x gdb and mach --debug

2016-04-08 Thread Jim Blandy
I don't know of a good way to make mach stop chatting in the middle of
GDB's serious conversation with Emacs. It sure would be nice to fix this.

Note that Emacs sets the "EMACS" environment variable to "t" in the GDB
subprocess. Perhaps mach could use this as a way to automatically behave
correctly.

On Fri, Apr 8, 2016 at 10:55 AM, Kyle Machulis 
wrote:

> https://bugzilla.mozilla.org/show_bug.cgi?id=1254313
>
> I've had this filed for a while now, just need to do something about it. :/
>
> On Fri, Apr 8, 2016 at 9:13 AM, Andreas Farre  wrote:
>
> > Looking for someone emacs savvy to help me with running mach --debug as a
> > command for M-x gdb. Currently there is an issue with mach logging a bit
> > too
> > much which screws up the GDB/MI communication with emacs. Has anyone
> > already some solution for this? Otherwise I got it working by adding a
> > quiet flag to mach --debug that changes the log level to not log
> > informative messages, is this the "right" way?
> >
> > farre
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jim Blandy
The bug is surprising, in that it claims that the bytecode that consumes
the value determines whether a warning is issued (SETLOCAL;CALL), rather
than the bytecode doing the fetch.

Is that the intended behavior? I can't see how that makes much sense.
On Dec 19, 2014 2:55 PM, David Rajchenbach-Teller dtel...@mozilla.com
wrote:

 I am going to suggest, once again, that warnings are generally noise and
 should be replaced by actionable errors, at least when the code is
 executed in a test suite.

 See
 https://groups.google.com/forum/#!topic/mozilla.dev.platform/gqSIOc5b-BI

 Cheers,
  David

 On 19/12/14 21:19, Jason Orendorff wrote:
  So if you go to about:config and set the javascript.options.strict pref,
  you'll get warnings about accessing undefined properties.
 
  js> Math.TAU
  undefined
  /!\ ReferenceError: reference to undefined property Math.TAU
 
  (It says ReferenceError, but your code still runs normally; it really
 is
  just a warning.)
 
  Is anyone using this? Bug 1113380 points out that the rules about what
 kind
  of code can cause a warning are a little weird (on purpose, I think).
 Maybe
  it's time to retire this feature.
 
  https://bugzilla.mozilla.org/show_bug.cgi?id=1113380
 
  Please speak up now, if you're still using it!
 
  -j
  ___
  dev-platform mailing list
  dev-platform@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-platform
 


 --
 David Rajchenbach-Teller, PhD
  Performance Team, Mozilla


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jim Blandy
On Fri, Dec 19, 2014 at 2:22 PM, Nick Fitzgerald nfitzger...@mozilla.com
wrote:

 I generally don't find them useful, but instead annoying, and that they
 provide a lot of noise to filter out to find actual relevant errors. This
 is including the undefined property errors. It is a common JS style to pass
 around configuration/option objects that will be missing many properties
 that get accessed by functions they are passed to. My understanding is that
 part of this is special cased not to generate these messages, but in my
 experience it isn't close to enough.


In a recent message, Jason explained that code that tests for the presence
of a property:

  if (obj.prop)
  if (obj.prop === undefined)
  if (obj.prop && ...)

are not supposed to generate warnings.

Do you find that you're getting these warnings from code that has one of
those forms?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-24 Thread Jim Blandy

On 03/23/2014 12:17 AM, Boris Zbarsky wrote:

Say we have this:

  observerService.addSettingObserver("data-changed", obj, "cache", null)

and someone sends a "data-changed" notification.  If I understand your
proposal correctly, that will do some equivalent of obj.cache = null,
assuming obj is still alive, right?


Yes, that's right.


But doing obj.cache = null can in fact invoke an arbitrary setter on
obj (and if we use defineProperty, we run into proxies being able to
intercept even that).  I guess we could require that obj is a
non-proxy object which has cache as an own non-configurable writable
data property to remove that hazard?


Yes, we would have to do something like that, to ensure code doesn't run.

The crucial point is: there's no construct that can completely 
substitute for weak references that does not itself expose GC 
non-determinism. So we must choose some less powerful construct that 
still gets the job done. These setter observers are one such; I'd 
argue that WeakMaps are another example of a deterministic interface 
that handles many cases that people used to want weak references for.


But replacing a powerful facility with something, um, weaker, entails 
looking back over the details of the problem to see what might suffice. 
Sometimes just zapping an outdated cache is good enough; but sometimes 
it isn't, so you look for some other limited-but-deterministic construct.


It's worth noting that we're not really *avoiding* GC-sensitivity; we're 
just *confining* it. WeakMaps have a very GC-entangled implementation 
behind their deterministic interface. And my toy setter observers 
proposal needs to know about finalization to remove the observers. But 
their visible behavior is deterministic.


I'm hoping that, having considered the specifics, the use cases will 
fall into two categories:


1) Those where there's some point before finalization at which observers 
/ listeners can be removed.


2) Those where a limited-but-deterministic construct could suffice. 
These we can then implement in C++, and avoid exporting any new 
non-deterministic features to JS. This could require some luck and 
creative thinking.


It would be a shame if there were some that fall into the third category:

3) Those where one must run involved application-specific logic at 
finalization time, never sooner.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-24 Thread Jim Blandy

On 03/23/2014 10:56 PM, Steve Fink wrote:

Anyway, with your specific example, it seems to me that the problem is
that you're losing information. The popups need the main window to
communicate with each other, and the main window needs all of its stuff
to work while it's open. The solution, then, seems like it would be to
decouple the communication mechanism from the main window. If the object
graph representing the communication mechanism were separated out from
the rest of the main window, so that there are no outgoing edges from
the communication piece to the rest of the main window, then you
wouldn't need weak refs.


Right! Please add this to my list as acceptable solution class 0): 
restructure your graph so that the GC edges actually reflect the 
ownership relations you need.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-24 Thread Jim Blandy

On 03/24/2014 12:55 AM, Jason Orendorff wrote:

and blow that whole window out the air lock.


Actually, we nuke it from orbit.

It's the only way to be sure.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-23 Thread Jim Blandy

On 03/22/2014 10:43 PM, K. Gadd wrote:
 I'm confused about how this would work. who's observing what? How can
 obj be collected if you're passing it as an argument? This looks like
 a synchronous property set passed through an unnecessary intermediary.

Sorry, my code example was confusingly incomplete. Let me try again:

Some observers simply need to mark some computed state as no longer 
being up-to-date (marking a cached result as invalid, say), and it's 
fine - preferable, perhaps - for this data to be recomputed the next 
time it's requested, not each time the observer is notified. In that 
case, a special, limited kind of observer could be enough:


observerService.addSettingObserver("state-changed", obj, "dirty", true)

or

observerService.addSettingObserver("data-changed", obj, "cache", null)

where the data-changed notification tells us that some underlying data 
has changed, and obj has cached the result of some computation that 
depends on that data in its 'cache' property. Sending a data-changed 
notification zaps obj's cache.


This is certainly a lot less general than a callback function! But in 
those cases where it is good enough, the big advantage is that 
observerService can use weak references or finalization internally to 
remove the observer when obj goes away, and the effect is completely 
invisible to its customers. From the outside, it is a completely 
deterministic API.


My general point is: In addition to looking for events other than 
finalization to pin cleanups on, we should also look into facilities 
less general than callbacks that present deterministic APIs, even if 
they use GC-sensitive facilities internally.
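
(If it helps to see the shape of the idea, here is a toy sketch in ordinary
C++ terms, with std::weak_ptr standing in for whatever engine-internal weak
reference the observer service would really use; every name here is
invented:)

#include <memory>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Toy "setting observer" service: a client registers "when `topic` fires,
// set this flag to `value`" rather than an arbitrary callback. The service
// holds only weak references, so a dead registrant is silently skipped, and
// because the only effect is a flag write, nothing outside can tell *when*
// the service noticed the registrant had gone away.
class SettingObserverService {
 public:
  void AddSettingObserver(const std::string& topic,
                          std::weak_ptr<bool> flag, bool value) {
    mObservers[topic].emplace_back(std::move(flag), value);
  }

  void Notify(const std::string& topic) {
    for (auto& entry : mObservers[topic]) {
      if (auto flag = entry.first.lock()) {  // registrant still alive?
        *flag = entry.second;
      }
    }
  }

 private:
  std::unordered_map<std::string,
                     std::vector<std::pair<std::weak_ptr<bool>, bool>>>
      mObservers;
};

A real implementation would also want to prune dead entries eventually, but
that housekeeping is just as invisible from the outside as the skipping is.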

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-23 Thread Jim Blandy

On 03/22/2014 11:36 PM, Boris Zbarsky wrote:

On 3/23/14 2:21 AM, Jim Blandy wrote:

See my slightly longer explanation in the previous message. The
advantage over passing true for ownsWeak is that my proposed API is
completely deterministic.


I'm not sure I follow  The current setup in the observer service is
also completely deterministic except in terms of the amount of memory
and CPU is uses, no?


I hope we're not talking past each other... the visible behavior of

Services.obs.addObserver("glurph", () => { alert("Glurph!"); }, true)

(pretending that the function supported nsIWeakReference) depends on 
when the GC notices the function is garbage. No?


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-22 Thread Jim Blandy
Perhaps in some cases weaker, more manageable mechanisms than 
full-powered observers and listeners could be sufficient.


For example, one approach which gets you the right cleanup behavior 
without exposing GC is to have special-case observers which can be 
easily proven to be safe to drop. Suppose we had:


observerService.addSettingObserver(obj, "dirty", true)

which promises to set obj's 'dirty' property to true. If obj is 
collected, this observer can obviously be dropped.


I understand that not every use case can be handled this way; it's 
simply not as powerful as calling a function. But certainly every case 
that *can* be handled this way should.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-22 Thread Jim Blandy

On 03/19/2014 04:39 PM, Kyle Huey wrote:

Short of not implementing things in JS, what ideas do people have for
fixing these issues?  We have some ideas of how to add helpers to
scope these things to the lifetime of the window (perhaps by adding an
API that returns a promise that is resolved at inner-window-destroyed
to provide a good cleanup hook that is not global) but that doesn't
help with objects intended to have shorter lifetimes.  Is it possible
for us to implement some sort of useful weak reference in JS?


There is a very long history of systems much older than Firefox trying 
to use the GC to tell them how to manage behaviors other than actual 
object allocation, and being unsatisfied with the results. We've really 
got to find some other event to pin the behavior on.


Here's what our Allen Wirfs-Brock, editor of the ECMAScript standard, 
who's been doing dynamic languages since forever, posted to 
moz.dev.tech.js-engine.internals (2013-11-2):




My experience is that Terrence is absolutely correct in this regard and 
that this position is shared by virtually all experienced GC 
implementors. A former colleague of mine, George Bosworth, expressed it 
this way in an experience report at an ISMM a number of years ago:


A modern GC is a heuristics-based resource manager.  The resources it 
manages generally have very low individual value (a few dozen bytes of 
memory) and exist in vast numbers.  There is a wide distribution of 
life-times of these resources, but the majority are highly ephemeral. 
The average latency of resource recovery is important but the recovery 
latency of any individual resource is generally unimportant. The 
heuristics of a great GC take all of these characteristics into account. 
 When you piggy-back upon a GC  (via finalization, or equivalent 
mechanism) the management of a different kind of resource you are 
applying the heuristic of memory resource management to the management 
of the piggy-backed resources. This is typically a poor fit.  For 
example, the piggy-backed resource may be of high individual value and 
exist in limited numbers (George used file descriptors as an example). 
 A good GC will be a bad resource manager for such resources.


There are many types of resources that need to be managed in complex 
systems. Thinking that a GC will serve as a good management foundation 
for most of those resources is just naive.


I previously made some other comments that relate to this issue at 
http://wiki.ecmascript.org/doku.php?id=strawman:weak_refs#allen_wirfs-brock_20111219 
In particular, see the backstop discussion.





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-21 Thread Jim Blandy

On 03/19/2014 04:39 PM, Kyle Huey wrote:

Short of not implementing things in JS, what ideas do people have for
fixing these issues?  We have some ideas of how to add helpers to
scope these things to the lifetime of the window (perhaps by adding an
API that returns a promise that is resolved at inner-window-destroyed
to provide a good cleanup hook that is not global) but that doesn't
help with objects intended to have shorter lifetimes.  Is it possible
for us to implement some sort of useful weak reference in JS?


The general principle of GC is that an object is removed when doing so 
could have no visible effect.


What if these DOM nodes could use a special class of observers / 
listeners that automatically set themselves aside when the node is 
deleted from the document, and re-instate themselves if the node is 
re-inserted in the document? Similarly for when the window goes away.


Then your behavior would be well-defined regardless of when GC happens 
or doesn't happen.


People always want the GC to help them sort out these sorts of problems; 
but a good GC is tightly tuned for its workload, and trying to adapt it 
to serve similar purposes is generally not a path to happiness.


It's better to identify the points at which an observer becomes useless 
--- hence my suggestion that element insertion and removal be the 
trigger for the corresponding observers / listeners.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-21 Thread Jim Blandy

On 03/21/2014 03:34 PM, Jim Blandy wrote:

What if these DOM nodes could use a special class of observers /
listeners that automatically set themselves aside when the node is
deleted from the document, and re-instate themselves if the node is
re-inserted in the document? Similarly for when the window goes away.


Instead of addObserver or addMessageListener, you'd have 
observeWhileInserted or listenWhileInserted. Implemented in some clever 
and efficient way to avoid thrashing during heavy DOM manipulation.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-21 Thread Jim Blandy

On 03/21/2014 05:03 PM, Boris Zbarsky wrote:

On 3/21/14 6:34 PM, Jim Blandy wrote:
I don't believe there are any DOM nodes involved in the situation that
Kyle described at the start of this thread...


It's true that when I read, "We are discovering a lot of leaks in JS 
implemented DOM objects," I wasn't sure what he was referring to...


But the same question carries over: isn't there some way to tie the 
registration / unregistration of observers / listeners to something more 
directly connected to the notification recipient becoming uninteresting?


That is, there's usually some point well before the notification 
recipient becomes garbage that it becomes uninteresting. Hence my 
mention of DOM node insertion / removal.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Exact rooting is now enabled on desktop

2014-01-17 Thread Jim Blandy

On 01/17/2014 01:24 PM, Terrence Cole wrote:

Exact stack rooting is now enabled by default on desktop builds of firefox.


I've never heard of a major project escaping from conservative GC once 
it had entered that state of sin; nor have I heard of anyone 
implementing a moving collector after starting with a non-moving collector.


So, doing *both* is impressive. I hope it pays off big!

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Let's never, ever, shut down NSS -- even in debug builds

2013-02-04 Thread Jim Blandy
On 02/01/2013 09:49 PM, Philip Chee wrote:
 On Thu, 31 Jan 2013 02:40:01 +0100, Robert Kaiser wrote:
 Robert Relyea schrieb:
 Switching to SQLite would make this a non-issue.

 Is there a plan to do this? An open bug? Someone working on it?

 Robert Kaiser
 
 Doesn't SQLite come with its own set of issues? I vaguely remember the
 urlclassifier code moving from SQLite to something else because of
 performance issues.

I've never heard a good explanation for why SQLite performed badly there
(or, the explanation I heard didn't make sense to me...), so I wouldn't
assume that was SQLite's fault. I know very picky people who swear by
SQLite; I wouldn't dismiss it out of hand. It just needs to be used
attentively.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform