BHR Project Status

2017-09-20 Thread Michael Layzell
In the last few months we've been putting work into making the data which
we collect from the Background Hang Reporter (BHR) more usable and
actionable. We use BHR to measure the frequency and cause of browser hangs
(when the main thread's event loop doesn't process events for 128ms or
longer). The goal is to collect information which lets us improve
Firefox's responsiveness by reducing the frequency of main thread hangs.
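
(For intuition, the detection amounts to something like the following
sketch; this is not the actual BHR code and the helper names are made up,
but it shows the idea of a watchdog noticing a stalled event loop:)

  // Sketch only: NowMs, SleepMs, gLastEventLoopActivityMs and
  // CaptureMainThreadStack are hypothetical stand-ins for the real plumbing.
  const uint32_t kHangThresholdMs = 128;

  void WatchdogThreadLoop()
  {
    while (true) {
      SleepMs(kHangThresholdMs);
      if (NowMs() - gLastEventLoopActivityMs >= kHangThresholdMs) {
        // The main thread hasn't processed an event within the threshold;
        // record a hang and capture its stack for reporting.
        CaptureMainThreadStack();
      }
    }
  }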

On the data collection side, the BHR stack walking code has been rewritten
to take advantage of Gecko Profiler internals. This reduced code
duplication, and enables us to take advantage of Gecko Profiler features
like JS stack interleaving. In addition, the ping submission logic has been
rewritten to perform less work on the main thread and to submit hang
information outside of the main ping. This let us begin collecting much
more data,
including interleaved chrome-js/native stack frame information for all
hangs, and information about the browser's state, such as pending input
events. Platform support has also been expanded from win32 to include
linux64, win64 and macOS.

Doug Thayer has written a visualizer for the collected data called
hangs.html (https://arewesmoothyet.com), based on the perf.html profiler
viewer. This interface allows analysis of how the frequency of specific
hangs changes over time, provides tools for filtering through hang
information, and offers a profiler-like interface for drilling into
specific hang stacks to determine what might be causing the problems. Doug
is actively working on adding new features to the UI to improve filtering
and make it easier to get good results from the data, but we're already
finding and fixing important bugs. Fixes so far include bug 1393597, where
we discovered that a synchronous GC on an edge case was having more of a
performance impact than we expected, and bug 1381465, where we observed and
prioritized fixing main thread I/O in the content process.


Re: Eliminating nsIDOM* interfaces and brand checks

2017-09-01 Thread Michael Layzell
Accidentally sent this off-list.

On Fri, Sep 1, 2017 at 12:14 PM, Michael Layzell <mich...@thelayzells.com>
wrote:

> I personally like the style of (2) the most. The isWhatever-style methods
> are too verbose, and I don't think that adding more code which depends on
> behavior we might want to remove from Firefox is a good idea.
>
> I'd probably not use the name `is` though; I'd be into ehsan's suggestion
> of `isInstanceOf` (or some variant on that) much more. If we make it wordy
> enough (e.g. isInstanceOf rather than is) then it'll be easier to change in
> the future if the web platform ever implements a good solution to the
> branding problem.
>
> On Fri, Sep 1, 2017 at 11:01 AM, Boris Zbarsky <bzbar...@mit.edu> wrote:
>
>> Now that we control all the code that can attempt to touch
>> Components.interfaces.nsIDOM*, we can try to get rid of these interfaces
>> and their corresponding bloat.
>>
>> The main issue is that we have consumers that use these for testing what
>> sort of object we have, like so:
>>
>>   if (obj instanceof Ci.nsIDOMWhatever)
>>
>> and we'd need to find ways to make that work.  In some cases various
>> hacky workarounds are possible in terms of property names the object has
>> and maybe their values, but they can end up pretty non-intuitive and
>> fragile.  For example, this:
>>
>>   element instanceof Components.interfaces.nsIDOMHTMLEmbedElement
>>
>> becomes:
>>
>>   element.localName === "embed" &&
>>   element.namespaceURI === "http://www.w3.org/1999/xhtml"
>>
>> and even that is only OK at the callsite in question because we know it
>> came from
>> http://searchfox.org/mozilla-central/rev/51b3d67a5ec1758bd2fe7d7b6e75ad6b6b5da223/dom/interfaces/xul/nsIDOMXULCommandDispatcher.idl#17
>> and hence we know it really is an Element...
>>
>> Anyway, we need a replacement.  Some possible options:
>>
>> 1)  Use "obj instanceof Whatever".  The problem is that we'd like to
>> maybe kill off the cross-global instanceof behavior we have now for DOM
>> constructors.
>>
>> 2)  Introduce chromeonly "is" methods on all DOM constructors.  So
>> "HTMLEmbedElement.is(obj)".  Possibly with some nicer but uniform name.
>>
>> 3)  Introduce chromeonly "isWhatever" methods (akin to Array.isArray) on
>> DOM constructors.  So "HTMLEmbedElement.isHTMLEmbedElement(obj)".  Kinda
>> wordy and repetitive...
>>
>> 4)  Something else?
>>
>> Thoughts?  It really would be nice to get rid of some of this stuff going
>> forward.  And since the web platform seems to be punting on branding,
>> there's no existing solution we can use.
>>
>> -Boris


Re: Intent to ship: link rel=preload

2017-07-31 Thread Michael Layzell
This also seems like a feature which some users may want to disable, for
example in order to reduce bandwidth usage on certain websites (I'm not
sure how bad the impact would be, but *shrug*).

I imagine we should add a pref if possible before shipping this feature.

On Mon, Jul 31, 2017 at 1:25 PM, Josh Matthews 
wrote:

> Why is there no preference to control it? I thought that was standard
> practice for new features to make it easier to unship them if we discover a
> problem that makes it difficult to back out.
>
> Cheers,
> Josh
>
>
> On 7/31/17 1:14 PM, Dragana Damjanovic wrote:
>
> >> As of Firefox 56, I intend to ship link rel=preload. The feature is
>> developed in bug 1222633 <
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1222633>. There is no pref
>> for
>> this feature, so it will be shipped directly.
>>
>> **Bug to turn on by default**:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1222633
>>
>> **Link to standard**:
>> https://w3c.github.io/preload/ (This is still a draft)
>>
>> dragana
>>
>>


Re: More Rust code

2017-07-10 Thread Michael Layzell
On Mon, Jul 10, 2017 at 9:41 AM, smaug  wrote:

> ipdl? or do you mean idl? or perhaps webidl?
> Also, xpconnect doesn't deal with rust, and our chrome code still heavily
> relies on idl+xpconnect.
>

I have written https://bugzilla.mozilla.org/show_bug.cgi?id=1293362, which,
while I'm not actively working on it, could be used to help close that
idl+xpconnect gap a bit. I've definitely had some opposition to the idea of
adding rust bindings to XPCOM interfaces, however, because we're
theoretically trying to kill them. If we decide that getting good bindings
is a thing which we want in order to develop more stuff in rust, I can
polish the patches back up.

> memory management. As far as I know there is no reasonable way to deal with
> cycle collection (which means also no reasonable way to have references to
> JS objects).
>

I don't have my WIP patches for cycle collection with rust XPCOM interfaces
on that bug, but both of these things are doable with XPCOM bindings for rust.


Re: Improving visibility of compiler warnings

2017-05-23 Thread Michael Layzell
I had it pointed out to me on IRC by jcristau that `chronic` already exists
as a Unix utility from moreutils:
https://manpages.debian.org/jessie/moreutils/chronic.1.en.html

It does pretty much the exact same thing as nowarn, but is probably better
written ^.^.


On Tue, May 23, 2017 at 11:56 AM, Michael Layzell <mich...@thelayzells.com>
wrote:

> I don't have enough time to work on a proper solution, but I wrote a
> simple rust wrapper which just consumes stderr from the wrapped process,
> and doesn't write it out unless the internal command exited with a non-zero
> exit code. I figured I'd post it in case anyone else finds it useful.
> https://github.com/mystor/nowarn
>
> Very hacky, but it helps clean up the spam of compiler warnings to let me
> actually find my build errors.
>
> On Mon, May 22, 2017 at 7:16 PM, Karl Tomlinson <mozn...@karlt.net> wrote:
>
>> On Sat, 20 May 2017 14:59:11 -0400, Eric Rescorla wrote:
>>
>> > On Sat, May 20, 2017 at 1:16 PM, Kris Maglione <kmagli...@mozilla.com>
>> > wrote:
>> >
>> >> On Sat, May 20, 2017 at 08:36:13PM +1000, Martin Thomson wrote:
>> >>
>> >>> Hmm, these are all -Wsign-compare issues bar one, which is fixed
>> >>> upstream.  We have an open bug tracking the warnings
>> >>> (https://bugzilla.mozilla.org/show_bug.cgi?id=1307958 and
>> specifically
>> >>> https://bugzilla.mozilla.org/show_bug.cgi?id=1212199 for NSS).
>> >>>
>> >>> NSS is built with -Werror separately, but disables errors on
>> >>> -Wsign-compare.  Disabling those warnings for a Firefox build of NSS
>> >>> wouldn't be so bad now that we share gyp config.  Based on a recent
>> >>> build, that's 139 messages (add 36 if you want to add nspr).
>> >>>
>> >>
>> >> Is there a particularly good reason that NSS needs to disable
>> >> -Wsign-compare? That seems like a footgun waiting to happen,
>> especially in
>> >> crypto code.
>> >
>> >
>> > Like many other pieces of old code, there are a lot of things that
>> trigger
>> > compiler warnings which we have been progressively removing, but we
>> > haven't removed these yet. Ideally we would have some tooling that
>> > would distinguish new warnings from old warnings, but absent that,
>> > until we fix all the existing warnings it's either disable -Werror or
>> > disable this particular warning. It's not something we're particularly
>> > happy about.
>>
>> -Wno-error=sign-compare is also an option,
>> but a wall of sign-compare warnings may not be helpful in general.
>>


Re: Improving visibility of compiler warnings

2017-05-23 Thread Michael Layzell
I don't have enough time to work on a proper solution, but I wrote a simple
rust wrapper which just consumes stderr from the wrapped process, and
doesn't write it out unless the internal command exited with a non-zero
exit code. I figured I'd post it in case anyone else finds it useful.
https://github.com/mystor/nowarn

Very hacky, but it helps clean up the spam of compiler warnings to let me
actually find my build errors.

On Mon, May 22, 2017 at 7:16 PM, Karl Tomlinson  wrote:

> On Sat, 20 May 2017 14:59:11 -0400, Eric Rescorla wrote:
>
> > On Sat, May 20, 2017 at 1:16 PM, Kris Maglione 
> > wrote:
> >
> >> On Sat, May 20, 2017 at 08:36:13PM +1000, Martin Thomson wrote:
> >>
> >>> Hmm, these are all -Wsign-compare issues bar one, which is fixed
> >>> upstream.  We have an open bug tracking the warnings
> >>> (https://bugzilla.mozilla.org/show_bug.cgi?id=1307958 and specifically
> >>> https://bugzilla.mozilla.org/show_bug.cgi?id=1212199 for NSS).
> >>>
> >>> NSS is built with -Werror separately, but disables errors on
> >>> -Wsign-compare.  Disabling those warnings for a Firefox build of NSS
> >>> wouldn't be so bad now that we share gyp config.  Based on a recent
> >>> build, that's 139 messages (add 36 if you want to add nspr).
> >>>
> >>
> >> Is there a particularly good reason that NSS needs to disable
> >> -Wsign-compare? That seems like a footgun waiting to happen, especially
> in
> >> crypto code.
> >
> >
> > Like many other pieces of old code, there are a lot of things that
> trigger
> > compiler warnings which we have been progressively removing, but we
> > haven't removed these yet. Ideally we would have some tooling that
> > would distinguish new warnings from old warnings, but absent that,
> > until we fix all the existing warnings it's either disable -Werror or
> > disable this particular warning. It's not something we're particularly
> > happy about.
>
> -Wno-error=sign-compare is also an option,
> but a wall of sign-compare warnings may not be helpful in general.
>


Re: Improving visibility of compiler warnings

2017-05-19 Thread Michael Layzell
With regard to ALLOW_COMPILER_WARNINGS, I'm strongly of the opinion that we
should provide an option to hide all warnings from modules marked as
ALLOW_COMPILER_WARNINGS now. They are only in "external" libraries which
most of us are not working on, and they really clog up my compiler output,
making me scroll up many pages to see the build errors which are actually
my fault (see this bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1301688). Unfortunately, I
don't know enough about the build system to write a patch to do this.

On Fri, May 19, 2017 at 2:59 PM, David Major  wrote:

> I'm not sure if this is exactly the same as the ALLOW_COMPILER_WARNINGS
> that we use for warnings-on-errors, but FWIW as of a couple of months
> ago I cleaned out the last warning-allowance in our "own" code.
> ALLOW_COMPILER_WARNINGS usage is now only in external libraries (I'm
> counting NSS and NSPR as "external" for this purpose since I can't land
> code to m-c directly).
>
> On Fri, May 19, 2017, at 02:55 PM, Kris Maglione wrote:
> > I've actually been meaning to post about this, with some questions.
> >
> > I definitely like that we now print warnings at the end of builds, since
> > it
> > makes them harder to ignore. But the current amount of warnings spew at
> > the
> > end of every build is problematic, especially when a build fails and I
> > need
> > to scroll up several pages to find out why.
> >
> > Can we make some effort to get clean warnings output at the end of
> > standard
> > builds? A huge chunk of the warnings come from NSS and NSPR, and should
> > be
> > easily fixable. Most of the rest seem to come from vendored libraries,
> > and
> > should probably just be whitelisted if we can't get fixes upstreamed.
> >
> > On Fri, May 19, 2017 at 11:44:08AM -0700, Gregory Szorc wrote:
> > >`mach build` attempts to parse compiler warnings to a persisted
> "database."
> > >You can view a list of compiler warnings post build by running `mach
> > >warnings-list`. The intent behind this feature was to make it easier to
> > >find and fix compiler warnings. After all, something out of sight is
> out of
> > >mind.
> > >
> > >There have been a few recent changes to increase the visibility of
> compiler
> > >warnings with the expectation being that raising visibility will
> increase
> > >the chances of someone addressing them. After all, a compiler warning is
> > >either a) valid and should be fixed or b) invalid and should be ignored.
> > >Either way, a compiler warning shouldn't exist.
> > >
> > >First, `mach build` now prints a list of all parsed compiler warnings at
> > >the end of the build. More details are in the commit message:
> > >https://hg.mozilla.org/mozilla-central/rev/4abe611696ab
> > >
> > >Second, Perfherder now records the compiler warning count as a metric.
> This
> > >means we now have alerts when compiler warnings are added or removed,
> like
> > >[1]. And, we can easily see graphs of compiler warnings over time, like
> > >[2]. The compiler warnings are also surfaced in the "performance" tab of
> > >build jobs on Treeherder, like [3].
> > >
> > >The Perfherder alerts and tracking are informational only: nobody will
> be
> > >backing you out merely because you added a compiler warning. While the
> > >possibility of doing that now exists, I view that decision as up to the
> C++
> > >module. I'm not going to advocate for it. So if you feel passionately,
> take
> > >it up with them :)
> > >
> > >Astute link clickers will note that the count metrics in CI are often
> noisy
> > >- commonly fluctuating by a value of 1-2. I suspect there are race
> > >conditions with interleaved process output and/or bugs in the compiler
> > >warnings parser/aggregator. Since the values are currently advisory
> only,
> > >I'm very much taking a "perfect is the enemy of good" attitude and have
> no
> > >plans to improve the robustness.
> > >
> > >[1] https://treeherder.mozilla.org/perf.html#/alerts?id=6700
> > >[2] https://mzl.la/2qFN0me and https://mzl.la/2rAkWR5
> > >[3]
> > >https://treeherder.mozilla.org/#/jobs?repo=autoland&selectedJob=100509284
> >
> > --
> > Kris Maglione
> > Senior Firefox Add-ons Engineer
> > Mozilla Corporation
> >
> > The X server has to be the biggest program I've ever seen that doesn't
> > do anything for you.
> >   --Ken Thompson
> >

Re: Gecko Profiler Win64 stability

2017-05-19 Thread Michael Layzell
You've done a fantastic job of making it more stable - thank you. I'll make
sure to flag you on any more synchronization issues I run into.

On Thu, May 18, 2017 at 3:04 PM, David Major  wrote:

> Hi,
>
> The Gecko Profiler used to be notoriously unstable on 64-bit Windows. (If
> you're curious: it's mostly because the unwinding rules for Win64 require
> the system libraries to do a lot of synchronization, which our stack walker
> would often call in ways that could deadlock.)
>
> Ever since the last Quantum Flow work week, I've been working on fixing
> these issues. As of this week's nightlies, all of the hangs that have been
> reported to me are resolved fixed. There might be some rare corner cases
> still remaining, but I'm confident that the most frequent problems have
> been addressed.
>
> So my ask is: if you've previously been deterred from using the profiler
> on Win64 builds, please give it another try! Flag me on any issues you find
> and/or mark them blocking bug 1366030. I will treat these bugs with high
> priority since I want to keep Quantum activities unblocked.
>
> Thanks!
>


Re: Using references vs. pointers in C++ code

2017-05-09 Thread Michael Layzell
On Tue, May 9, 2017 at 12:05 PM, Boris Zbarsky  wrote:

> On 5/9/17 11:41 AM, Nathan Froyd wrote:
>
>> I think I would feel a little
>> better about this rule if we permitted it only for types that deleted
>> assignment operators.  Not sure if that's really practical to enforce.
>>
>
> Hmm.  I wonder what happens right now if you try to invoke
> nsINode::operator=
>
> But yes, having such a restriction would make sense to me if we can do it.


It wouldn't be too difficult to have a static analysis which complains if a
RefCounted object (an object with AddRef and Release methods) is assigned
into at all, with the assumption that that will do The Wrong Thing™ 100% of
the time.

This of course would only be caught on infrastructure because most people
don't run the SA locally (we have a bug to make this easier which Andi is
working on), but that might be OK as none of this code would get into the
tree.
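
To make that concrete, here is a minimal sketch (hypothetical type, not
real Gecko code) of the pattern such a check would flag:

  class Node
  {
  public:
    void AddRef();
    void Release();
    // ... data members, including the refcount, all copied by the
    // implicitly-generated operator= ...
  };

  void CopyInto(Node* aDest, const Node& aSource)
  {
    // Assigning into a live refcounted object clobbers its refcount via the
    // default copy assignment; the proposed analysis would flag this line.
    *aDest = aSource;
  }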


>
>
> -Boris
>


Re: Mixing nsresult and Result code

2017-05-08 Thread Michael Layzell
I also don't like the NS_TRY name; it seems too close to MOZ_TRY. I would
prefer to avoid names which are identical except s/NS/MOZ/.

Perhaps NSRESULT_TRY would be a better name? It makes it clear that it is
performing the try against the nsresult type.
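
(Concretely, that would just be a rename of the helper macro quoted below,
i.e. something like:)

  #define NSRESULT_TRY(expr) MOZ_TRY(WrapNSResult(expr))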

On Mon, May 8, 2017 at 10:17 AM, Ehsan Akhgari 
wrote:

> I think these seem like valuable additions to nscore.h.  It would be
> helpful to extend these facilities that would allow more code to use the
> Result-based programming model.
>
> (I'm not too much of a fan of the NS_TRY name, but can't think of a better
> name myself... :/ )
>
>
>
> On 05/07/2017 03:34 PM, Kris Maglione wrote:
>
>> I've been trying to write most of my new code to use Result.h as much as
>> possible. When I have to mix Result-based code with nsresult-based code,
>> though, things tend to get ugly, and I wind up writing helpers. At this
>> point I've wound up writing the same helpers in 3 or 4 different places, so
>> it may be time to try to standardize on something.
>>
>> The helpers I've been using look something like:
>>
>>  template <>
>>  class MOZ_MUST_USE_TYPE GenericErrorResult<nsresult>
>>  {
>>nsresult mErrorValue;
>>
>>template <typename V, typename E> friend class Result;
>>
>>  public:
>>explicit GenericErrorResult(nsresult aErrorValue) :
>> mErrorValue(aErrorValue) {}
>>
>>operator nsresult() { return mErrorValue; }
>>  };
>>
>>  static inline Result<Ok, nsresult>
>>  WrapNSResult(PRStatus aRv)
>>  {
>>  if (aRv != PR_SUCCESS) {
>>  return Err(NS_ERROR_FAILURE);
>>  }
>>  return Ok();
>>  }
>>
>>  static inline Result<Ok, nsresult>
>>  WrapNSResult(nsresult aRv)
>>  {
>>  if (NS_FAILED(aRv)) {
>>  return Err(aRv);
>>  }
>>  return Ok();
>>  }
>>
>>  #define NS_TRY(expr) MOZ_TRY(WrapNSResult(expr))
>>
>> And their use looks something like:
>>
>>  Result<nsCOMPtr<nsIFile>, nsresult>
>>  GetFile(nsIFile* aBase, const char* aChild)
>>  {
>>nsCOMPtr<nsIFile> file;
>>NS_TRY(aBase->Clone(getter_AddRefs(file)));
>>NS_TRY(aBase->AppendNative(aChild));
>>
>>return Move(file);
>>  }
>>
>>  nsresult
>>  ReadFile(const char* aPath)
>>  {
>>nsCOMPtr<nsIFile> file;
>>MOZ_TRY_VAR(file, GetFile(mBase, aPath));
>>
>>PRFileDesc* fd;
>>NS_TRY(file->OpenNSPRFileDesc(PR_RDONLY, 0, &fd));
>>
>>...
>>return NS_OK;
>>  }
>>
>> Where NS_TRY converts a nsresult or PRStatus to an appropriate Result or
>> GenericErrorResult, and the GenericErrorResult specialization
>> automatically converts to an nsresult when used in nsresult-based code.
>>
>> Does this seem like the kind of thing we should implement in some
>> standard header, or would a different approach be better?
>>
>> -Kris


Re: Proper way to return nsresult from Rust before Stylo is mandatory?

2017-03-31 Thread Michael Layzell
There are patches up for nserror bindings in bug 1320179 (
https://bugzilla.mozilla.org/show_bug.cgi?id=1320179).

If people have API design opinions, you can let me know on the bug.

On Mon, Mar 27, 2017 at 10:22 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com>
wrote:

> On 2017-03-27 5:10 PM, Michael Layzell wrote:
> > I don't think it would be too hard. At one point I had a WIP patch which
> > implemented it, but I would have to dig it up again.
> >
> > I'll see if I can get a patch up for the crate in my spare time.
>
> Thank you!
>
> > On Sun, Mar 26, 2017 at 11:26 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com
> > <mailto:ehsan.akhg...@gmail.com>> wrote:
> >
> > On 2017-03-17 10:08 AM, Michael Layzell wrote:
> > > I don't think we have any particularly good tools for this right
> now. A
> > > while ago I filed https://bugzilla.mozilla.org/
> show_bug.cgi?id=1320179
> > <https://bugzilla.mozilla.org/show_bug.cgi?id=1320179> to
> > > add a separate crate like the nsstring crate which provides the
> nsresult
> > > bindings. If we are starting to get more use cases for it we
> probably want
> > > to implement something like it which moves the error code
> definition code
> > > into python or similar, and then generates both rust and C++
> bindings in
> > > the outdir.
> >
> > I think the right way to do it is to have an nsresult crate as you
> > suggest.  How much work would that be?
> >
> > As far as code parsing goes, we already have nsError.h, so why can't
> we
> > just use rustbindgen?  But I guess moving the definitions elsewhere
> and
> > generating the headers out of them is easy as well.  The build
> system's
> > support for generated files is pretty good as far as I know.
> Whichever
> > way is easier is better, I guess!
> >
> >
>
>


Re: Proper way to return nsresult from Rust before Stylo is mandatory?

2017-03-27 Thread Michael Layzell
I don't think it would be too hard. At one point I had a WIP patch which
implemented it, but I would have to dig it up again.

I'll see if I can get a patch up for the crate in my spare time.

On Sun, Mar 26, 2017 at 11:26 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com>
wrote:

> On 2017-03-17 10:08 AM, Michael Layzell wrote:
> > I don't think we have any particularly good tools for this right now. A
> > while ago I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1320179
> to
> > add a separate crate like the nsstring crate which provides the nsresult
> > bindings. If we are starting to get more use cases for it we probably
> want
> > to implement something like it which moves the error code definition code
> > into python or similar, and then generates both rust and C++ bindings in
> > the outdir.
>
> I think the right way to do it is to have an nsresult crate as you
> suggest.  How much work would that be?
>
> As far as code parsing goes, we already have nsError.h, so why can't we
> just use rustbindgen?  But I guess moving the definitions elsewhere and
> generating the headers out of them is easy as well.  The build system's
> support for generated files is pretty good as far as I know.  Whichever
> way is easier is better, I guess!
>


Re: Third Party Library Alert Service

2017-03-17 Thread Michael Layzell
I also want this information programmatically for the clang plugin at some
point. It will be useful for many of the checks which we can't enforce on
third party code because we aren't able to (or don't want to) modify that
code directly. Right now we have a decent number of check-specific
whitelists which could probably be consolidated if this list were easily
available.
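
As a rough sketch of the consolidation I have in mind (hypothetical names;
it assumes the list from tools/rewriting/ThirdPartyPaths.txt gets baked
into the plugin at build time):

  #include <string>
  #include <vector>

  // Hypothetical: generated from ThirdPartyPaths.txt rather than being
  // duplicated as a per-check whitelist.
  static const std::vector<std::string> kThirdPartyPaths = {
    "gfx/skia/",
    "media/libvpx/",
  };

  static bool IsThirdPartyPath(const std::string& aPath)
  {
    for (const std::string& prefix : kThirdPartyPaths) {
      if (aPath.compare(0, prefix.size(), prefix) == 0) {
        return true;
      }
    }
    return false;
  }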

On Fri, Mar 17, 2017 at 2:49 PM, Ted Mielczarek  wrote:

> On Fri, Mar 17, 2017, at 02:40 PM, trit...@mozilla.com wrote:
> > On Friday, March 17, 2017 at 1:35:15 PM UTC-5, Sylvestre Ledru wrote:
> > > Looks like we are duplicating some contents and efforts with:
> > > https://dxr.mozilla.org/mozilla-central/source/tools/
> rewriting/ThirdPartyPaths.txt
> > > Any plan to "merge" them?
> >
> > There is now! (Or, well, there will be one.) =)
> >
> > If anyone makes use of this file beyond it just serving as a reference,
> > or if there is tooling around this file please talk tell me about it!
>
> We've discussed having a better way to annotate third-party libraries in
> the tree many times before, but never made any real progress. There are
> lots of reasons to want that info--ignoring those directories for
> automated rewrites (per Sylvestre's link) or some kinds of lint
> checking, tracking upstream fixes, making it easier to update our
> vendored copies of code in a consistent way across projects, etc. I
> wrote up a proposal not long ago[1] that covered some related things, it
> hasn't gone anywhere but you might find it interesting. Specifically one
> thing I'd love to see us do is something Chromium does--they have "FYI
> bots" that will do try builds against the latest versions of their
> dependencies, and a bot that will submit patches that can be landed when
> those FYI builds are green, so they can easily keep up-to-date when
> upstream updates don't break the build.
>
> -Ted
>
> 1.
> https://docs.google.com/document/d/1yjGTR2io97p-
> 7ztArXl2tFbmCkZItqj9kWUJJYAdekw/edit?usp=sharing


Re: Proper way to return nsresult from Rust before Stylo is mandatory?

2017-03-17 Thread Michael Layzell
I don't think we have any particularly good tools for this right now. A
while ago I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1320179 to
add a separate crate like the nsstring crate which provides the nsresult
bindings. If we are starting to get more use cases for it, we probably want
to implement something like that which moves the error code definitions
into python or similar, and then generates both rust and C++ bindings in
the outdir.

On Fri, Mar 17, 2017 at 6:03 AM, Henri Sivonen  wrote:

> It seems that our Rust bindings for nsresult are part of Stylo, but
> Stylo isn't yet a guaranteed part of the build.
>
> Until Stylo becomes a mandatory part of the build, what's the proper
> way to return nsresult from Rust such that it works with or without
> Stylo enabled?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Michael Layzell
I'm pretty sure that by the time we're reaching that number of cores we'll
be blocked on preprocessing, as preprocessing occurs on the local machine
(where the header files are), and on network I/O. I imagine that peak
efficiency is well below 2k machines.

In addition, there's some unavoidable latency due to the 1+ minute spent
doing configure/export and such at the start/end of the build.

I imagine that it will be hard to get build times below 3-4 minutes without
eliminating those constant time overheads.

On Thu, Mar 9, 2017 at 8:16 AM, Mike Hommey  wrote:

> On Thu, Mar 09, 2017 at 07:45:13AM -0500, Ted Mielczarek wrote:
> > On Wed, Mar 8, 2017, at 05:43 PM, Ehsan Akhgari wrote:
> > > On 2017-03-08 11:31 AM, Simon Sapin wrote:
> > > > On 08/03/17 15:24, Ehsan Akhgari wrote:
> > > >> What we did in the Toronto office was walked to people who ran
> Linux on
> > > >> their desktop machines and installed the icecream server on their
> > > >> computer.  I suggest you do the same in London.  There is no need to
> > > >> wait for dedicated build machines.  ;-)
> > > >
> > > > We’ve just started doing that in the Paris office.
> > > >
> > > > Just a few machines seem to be enough to get to the point of
> diminishing
> > > > returns. Does that sound right?
> > >
> > > I doubt it...  At one point I personally managed to saturate 80 or so
> > > cores across a number of build slaves at the office here.  (My personal
> > > setup has been broken so unfortunately I have been building like a
> > > turtle for a while now myself...)
> >
> > A quick check on my local objdir shows that we have ~1800 source files
> > that get compiled during the build:
> > $ find /build/debug-mozilla-central/ -name backend.mk -o -name
> > ipdlsrcs.mk -o -name webidlsrcs.mk | xargs grep CPPSRCS | grep -vF
> > 'CPPSRCS += $(UNIFIED_CPPSRCS)' | cut -f3- -d' ' | tr ' ' '\n' | wc -l
> > 1809
> >
> > That's the count of actual files that will be passed to the C++
> > compiler. The build system is very good at parallelizing the compile
> > tier nowadays, so you should be able to scale the compile tier up to
> > nearly that many cores. There's still some overhead in the non-compile
> > tiers, but if you are running `mach build binaries` it shouldn't matter.
>
> Note that several files take more than 30s to build on their own, so a
> massively parallel build can't be faster than that + the time it takes
> to link libxul. IOW, don't expect to be able to go below 1 minute, even
> with 2000 cores.
>
> Mike


New ErrorResult::Throw Static Analysis

2017-03-08 Thread Michael Layzell
Bug 1331434 is on inbound, and it adds a new static analysis and an
attribute called MOZ_MUST_RETURN_FROM_CALLER. This attribute is currently
attached to ErrorResult::Throw.

This analysis is intended to help catch buggy code which performs logic
like:

  if (!foo) {
aRv.Throw(NS_ERROR_FAILURE);
  }
  foo->Bar();

Where the intended behavior is to return immediately after calling Throw.
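
That is, the pattern the analysis is pushing you towards is simply (with an
appropriate value returned if the function isn't void):

  if (!foo) {
    aRv.Throw(NS_ERROR_FAILURE);
    return;
  }
  foo->Bar();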

If you find yourself having to perform additional cleanup after your calls
to Throw, you can instead use ThrowWithCustomCleanup, which does not have
the annotation.

If there are other methods like Throw which callers should usually return
immediately after calling, consider adding the MOZ_MUST_RETURN_FROM_CALLER
attribute. :)


Re: A reminder about MOZ_MUST_USE and [must_use]

2017-01-20 Thread Michael Layzell
It wouldn't be too hard to automatically generate Result
wrapper methods for all of our XPIDL interfaces. I had a
prototype branch at one point which did this on the rust side, as part of
my now-dead rust XPIDL bindings.

That would convert a good number of our fallible calls to using Result. We might even be able to look into doing something similar with
our WebIDL bindings and ErrorResult.
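
As a sketch of what a generated C++-side wrapper could look like (written
by hand here, with nsIFile::Clone as the example; the generated shape is
hypothetical):

  // Turns the out-parameter + nsresult convention into a Result return.
  static mozilla::Result<nsCOMPtr<nsIFile>, nsresult>
  CloneFile(nsIFile* aFile)
  {
    nsCOMPtr<nsIFile> clone;
    nsresult rv = aFile->Clone(getter_AddRefs(clone));
    if (NS_FAILED(rv)) {
      return mozilla::Err(rv);
    }
    return std::move(clone);
  }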

On Fri, Jan 20, 2017 at 8:59 AM, Ted Mielczarek  wrote:

> On Fri, Jan 20, 2017, at 08:19 AM, Nicolas B. Pierron wrote:
> > > The Rust case is helped by the fact that `Result` is the defacto type
> > > for returning success or error, and it's effectively `must_use`. We
> > > don't have a similar default convention in C++.
> >
> > We have
> >
> > http://searchfox.org/mozilla-central/rev/30fcf167af036aeddf322de44a2fad
> d370acfd2f/mfbt/Result.h#173
> >
> > we just have to make it the default convention now.
>
> Yes, and this is great, I just meant that in Rust 99% of code that's
> returning success/failure is using `Result` because it's in the standard
> library, whereas in C++ there's not an equivalent. `mozilla::Result` is
> great and I hope we can convert lots of Gecko code to use it, but we
> have *a lot* of Gecko code that's not already there.
>
> -Ted


Re: Intent to Implement and Ship: Large-Allocation Header

2017-01-19 Thread Michael Layzell
On Thu, Jan 19, 2017 at 3:58 AM,  wrote:

> Hey,
>
> a bunch of questions:
>
> how will you handle synchronous scripting between that process and other
> frames from the same origin that aren't in that dedicated process?
>

If there are other toplevel windows within the loading window's unit of
related browsing contexts, the Large-Allocation header will be ignored, and
a message will be printed to the console. There was no other easy approach
within the scope of this project's timeline that didn't cause web-visible
side effects.

In an earlier version of the design, we considered severing these
synchronous connections, but we decided on making the smaller
web-compatible change of simply ignoring the header in these problematic
edge cases.


> Will it be possible for an iframe to send this header? What will happen
> then?
>

This header is ignored when it is seen while loading an iframe, like above.


> What will happen if a site requests a large allocation, but then runs out
> of address space/memory before it needs the allocation?
>

The current implementation of Large-Allocation ignores the size property of the
header. Instead, it always creates a new process, and loads the page into
it, performing no pre-allocation.

We were originally planning to support pre-allocation, but after testing we
determined that simply allocating a new process without the pre-allocation
would be sufficient to greatly improve allocation success rates.

In the future, we may decide to support pre-allocation. The design we were
considering would have involved allocating this memory at startup and
providing it to our JS engine. When a large allocation would be made that
reserved memory would be provided instead, with the remaining memory
returned to the OS.

We would also attach a timer and a memory-pressure listener to the
reservation; if either fired before the memory had been claimed, the
allocation would be freed back to the OS.

When the website is done using the allocation, it would be returned to the
OS like usual.


> Will you somehow notify the site if you failed to allocate the requested
> memory?
>

The site will notice that it failed to allocate the requested memory when
it requests the memory, and the allocation fails. We don't provide any
other mechanism for notifying the website programmatically about
Large-Allocation status.


> After the first wasm program was run, will you return the memory? Will it
> stay reserved for future wasm programs on the same page?
>

See the above 2 answers.


>
> best
> -jochen


Re: Intent to Implement and Ship: Large-Allocation Header

2017-01-18 Thread Michael Layzell
This is true, but I'm not sure how this is worse than other mechanisms
which can currently be used to deny other pages resources, such as
allocating and leaking large amounts of memory or pegging the event loop.

On Wed, Jan 18, 2017 at 4:38 PM, Martin Thomson <m...@mozilla.com> wrote:

> On Thu, Jan 19, 2017 at 10:17 AM, Michael Layzell
> <mich...@thelayzells.com> wrote:
> > @Martin There is a pref (dom.ipc.processCount.webLargeAllocation -
> default
> > 2) which controls the maximum number of processes which may be allocated
> to
> > Large-Allocation processes. Any requests past that number
> (firefox-globally)
> > will not cause new processes to be created.
>
> That prevents unlimited spawning of processes, but doesn't stop a site
> from blocking another one from getting a new process.
>


Re: Intent to Implement and Ship: Large-Allocation Header

2017-01-18 Thread Michael Layzell
@Martin There is a pref (dom.ipc.processCount.webLargeAllocation - default
2) which controls the maximum number of processes which may be allocated to
Large-Allocation processes. Any requests past that number
(firefox-globally) will not cause new processes to be created.

@Chris the Large-Allocation header acts as a hint. Currently I intend to
use dedicated processes on 64-bit as well as 32-bit, as there are some
minor differences in behavior due to there being multiple content processes
when a page is loaded in the Large-Allocation process. To ensure that
people developing games don't get surprised by these issues in production,
I want developers (who will most likely have 64-bit machines) to run into
them before reaching production. In the future, when we are shipping
e10s-multi, we can consider ignoring the large allocation hint on 64-bit
architectures, as there will be a much lower chance of
multiple-content-process-only differences affecting the program.

On Wed, Jan 18, 2017 at 4:03 PM, Martin Thomson <m...@mozilla.com> wrote:

> On Thu, Jan 19, 2017 at 9:21 AM, Michael Layzell
> <mich...@thelayzells.com> wrote:
> > Security & Privacy Concerns: none
>
> I doubt that this is true.  You have provided a way for sites to gain
> a whole process to themselves, on request.  Denial of service seems
> like something that would need to be considered if you intend to ship
> this.
>


Intent to Implement and Ship: Large-Allocation Header

2017-01-18 Thread Michael Layzell
Summary:
Games implemented on the web platform using WASM or asm.js use large
contiguous blocks of allocated memory as their backing store for game
memory. For complex games, these allocations can be quite large, sometimes
as large as 1GB. In 64-bit builds, we have no problem finding a large
enough section of virtual memory that we can allocate a large contiguous
hunk. Unfortunately normal use of Firefox quickly fragments the smaller
address space of 32-bit Firefox, meaning that these complex games often
fail to run.

The Large-Allocation header is a hint header for a document. It tells the
browser that the web content in the to-be-loaded page is going to want to
perform a large contiguous memory allocation. In our current implementation
of this feature, we handle this hint by starting a dedicated process for
the to-be-loaded document, and loading the document inside of that process
instead. Further top-level navigations in that "Fresh Process" will cause
the process to be discarded, and the browsing context will shift back to
the primary content process.
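
For example, a page opting in would be served with a response header along
these lines (illustrative value; the current implementation only checks for
the header's presence, not the specific size):

  Large-Allocation: 0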

We hope to ship this header alongside WASM in Firefox 52 to enable game
developers to develop more intense games on our web platform. More details
on the implementation and limitations of this header and our implementation
can be found in the linked bug.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1331083

Link to standard: This feature is non-standard

Platform coverage: Firefox Desktop with e10s enabled

Target release: Fx52

Preference behind which this is implemented:
dom.largeAllocationHeader.enabled

DevTools bug: none

Do other browser engines implement this?
No. We are in conversations with Chrome about them potentially also
recognizing this hint.

Tests: Added as part of the implementation

Security & Privacy Concerns: none


Re: Error handling with MOZ_MUST_USE

2016-11-03 Thread Michael Layzell
We also have a static analysis attribute, MOZ_MUST_USE_TYPE, which can be
added to class declarations to ensure that values of those types are always
used when they are returned. We currently use it to annotate
already_AddRefed. If there are other types which should universally be
checked when used as a return value, that annotation may be a useful
shortcut.
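
A minimal sketch of how it is applied (hypothetical Token type; the
annotation sits on the class, and any ignored return of that type is then
flagged by the analysis):

  class MOZ_MUST_USE_TYPE Token
  {
    // ...
  };

  Token AcquireToken();

  void Caller()
  {
    AcquireToken();  // flagged: a returned MOZ_MUST_USE_TYPE value is ignored
  }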

On Thu, Nov 3, 2016 at 3:42 AM, Wei-Cheng Pan  wrote:

> Hi,
>
> As :njn announced before [1], we are trying to add MOZ_MUST_USE to all
> fail-able functions, which is tracking in bug 1268766 [2].
>
> We have bugs to track the progress for each directory.
> If anyone is interested, please feel free to take any of the bugs.
>
> We hope all reviewers also mind the following rules, so that new code
> may benefit by the static checking.
> I'll update the style guide to reflect these rules later.
>
> The attribute is meant to avoid possible fatal failure.
> All failure should be handled, reported, asserted, or at least caught by
> Unused (and better leave a comment, see bug 1301574 [3]).
>
> You can find the rule proposed by :njn in the bug comment [4].
> Because now we don't need to worry about NS_IMETHOD (see bug 1295825
> [5]), we have updated rules:
>
> 1. search all *.h and *.idl
> 2. add [must_use] for *.idl
> 3. add MOZ_MUST_USE for all functions that return |nsresult| or |bool|
> in *.h
>
> Also with some exceptions:
>
> 1. Predicates, getters which return |bool|.
>
> 2. IPC method implementation (e.g. bool RecvSomeMessage()).
>
> 3. Most callers just check the output parameter. e.g.:
>
> nsresult
> SomeMap::GetValue(const nsString& key, nsString& value);
>
> nsString value;
> rv = map->GetValue(key, value);
>
> If it failed, of course |value| will be empty.
> But if it succeed, |value| may still be empty because the stored value
> is empty in the first place.
> So just check |value| is a common idiom.
> This was also discussed in thread [1], and seems we still have no
> conclusion.
>
> Such grey zone will depend on each reviewers' own judgement.
>
> Wei-Cheng Pan
>
> [1]
> https://groups.google.com/forum/#!msg/mozilla.dev.platform/dS5Dz6SfO3w/
> avzIxb7CBAAJ
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=MOZ_MUST_USE
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1301574
> [4] https://bugzilla.mozilla.org/show_bug.cgi?id=1268766#c5
> [5] https://bugzilla.mozilla.org/show_bug.cgi?id=1295825
>


Re: Converting assertions into release assertions

2016-09-23 Thread Michael Layzell
We have already implemented that static analysis but haven't enabled it
yet, because static analysis doesn't run on Windows machines:
https://bugzilla.mozilla.org/show_bug.cgi?id=1223932

On Fri, Sep 23, 2016 at 12:51 PM, Bobby Holley 
wrote:

> On Fri, Sep 23, 2016 at 9:30 AM, Ted Mielczarek 
> wrote:
>
> > On Fri, Sep 23, 2016, at 10:20 AM, Ehsan Akhgari wrote:
> > > On 2016-09-23 8:49 AM, Gijs Kruitbosch wrote:
> > > > Then this enables me to answer Ehsan's question. These are the builds
> > > > I've recently tried using (e.g. when debugging intermittents in VMs
> > > > because then I don't need to set up too much of a build env) and
> > they're
> > > > still very slow.
> > >
> > > Unfortunately I don't think we can do debug builds that are faster than
> > > that.  It's possible that the debugging checks that we perform in the
> > > cases you're trying to use the browser in are just prohibitively too
> > > expensive.  :(
> >
> > If someone had time, it might be interesting to profile something that
> > feels slow in an --enable-debug --enable-optimize build and see if
> > anything stands out. These builds will always be a little slower than
> > the builds we ship (because they're not PGOed), but it would be nice if
> > they were at least usable. Making them faster would also make our debug
> > test runs in automation faster, which is a nice win! (We spend *a lot*
> > of CPU time running debug tests in automation.)
> >
>
> Nathan looked at this a few years ago:
> https://bugzilla.mozilla.org/show_bug.cgi?id=972135
>
> There's some low-hanging fruit for anyone with static analysis savvy.
>
>
> >
> > -Ted
> >


Re: New [must_use] property in XPIDL

2016-08-23 Thread Michael Layzell
We already have mozilla::Unused in mfbt/Unused.h. You can use it as
`mozilla::Unused << SomethingReturningAValue();`. I believe that the
existing static analyses which look at unused variables respect this.

On Tue, Aug 23, 2016 at 9:47 AM, Eric Rescorla  wrote:

> Fair enough. I wouldn't be against introducing a separate unused marker for
> this purpose.
>
> -Ekr
>
>
> On Tue, Aug 23, 2016 at 6:40 AM, Benjamin Smedberg 
> wrote:
>
> > cast-to-void is commonly suggested as an alternative to an explicit
> unused
> > marking, and it is something that I wanted to use originally.
> > Unfortunately, we have not been able to make that work: this is primarily
> > because compilers often remove the cast-to-void as part of the parsing
> > phase, so it's not visible in the parse tree for static checkers.
> >
> > --BDS
> >
> > On Tue, Aug 23, 2016 at 9:19 AM, Eric Rescorla  wrote:
> >
> >> I'm indifferent to this particular case, but I think that rkent's point
> >> about static
> >> checking is a good one. Given that C++ has existing annotations that
> say:
> >>
> >> - This does not produce a useful return value (return void)
> >> - I am explicitly ignoring the return value (cast to void)
> >>
> >> And that we have (albeit imperfect) static checking tools that can
> detect
> >> non-use of
> >> return values, it seems like we would ultimately be better-off by using
> >> those tools
> >> to treat use of the return value by the default flagging a compiler
> error
> >> when that
> >> doesn't happen yet a third annotation rather than treating "use the
> return
> >> value
> >> somehow" as the default and flagging a compiler error when that doesn't
> >> happen.
> >> I appreciate that we have a lot of code that violates this rule now, so
> >> actually cleaning
> >> that up is a long process gradually converting the code base, but it has
> >> the advantage
> >> that once that's done then it just stays clean (just like any other
> >> -Werror
> >> conversion).
> >>
> >> -Ekr
> >>
> >>
> >> On Mon, Aug 22, 2016 at 5:03 PM, Bobby Holley 
> >> wrote:
> >>
> >> > On Mon, Aug 22, 2016 at 4:39 PM, R Kent James 
> wrote:
> >> >
> >> > > On 8/21/2016 9:14 PM, Nicholas Nethercote wrote:
> >> > > > I strongly encourage people to do likewise on
> >> > > > any IDL files with which they are familiar. Adding appropriate
> >> checks
> >> > > isn't
> >> > > > always easy
> >> > >
> >> > > Exactly, and I hope that you and others restrain your exuberance a
> >> > > little bit for this reason. A warning would be one thing, but a hard
> >> > > failure that forces developers to drop what they are doing and think
> >> > > hard about an appropriate check is just having you set YOUR
> priorities
> >> > > for people rather than letting people do what might be much more
> >> > > important work.
> >> > >
> >> > > There's lots of legacy code around that may or may not be worth the
> >> > > effort to think hard about such failures. This is really better
> suited
> >> > > for a static checker that can make a list of problems, which list
> can
> >> be
> >> > > managed appropriately for priority, rather than a hard error that
> >> forces
> >> > > us to drop everything.
> >> > >
> >> >
> >> > I don't quite follow the objection here.
> >> >
> >> > Anybody who adds such an annotation needs to get the tree green before
> >> they
> >> > land the annotation. Developers writing new code that ignores the
> >> nsresult
> >> > will get instant feedback (by way of try failure) that the developer
> of
> >> the
> >> > API thinks the nsresult needs to be checked. This doesn't seem like an
> >> > undue burden, and enforced-by-default assertions are critical to code
> >> > hygiene and quality.
> >> >
> >> > If your concern is the way this API change may break Thunderbird-only
> >> > consumers of shared XPCOM APIs, that's related to Thunderbird being a
> >> > non-Tier-1 platform, and pretty orthogonal to the specific change that
> >> Nick
> >> > made.
> >> >
> >> > bholley
> >> >
> >> >
> >> >
> >> > > :rkent


Re: Rust code in mozilla-central now builds via cargo

2016-08-08 Thread Michael Layzell
Thanks a ton for your work on this! This makes working on rust code in
gecko much easier, and makes importing libraries from the ecosystem simpler
as well (if a tad tedious).

On Mon, Aug 8, 2016 at 1:21 PM, Bobby Holley  wrote:

> +1. This is an important step forward. Thanks Nathan!
>
> On Mon, Aug 8, 2016 at 6:07 AM, Ted Mielczarek  wrote:
>
> > On Sun, Aug 7, 2016, at 06:06 PM, Nathan Froyd wrote:
> > > TL; DR: Bug 1231764 has landed on mozilla-central; the build system
> > > now invokes cargo to build all the Rust code in m-c.  This should
> > > result in a better Rust developer experience, as well as making it
> > > easier to import Rust libraries into m-c.
> >
> > Thanks so much for all your hard work on this! This is going to make a
> > huge difference to the future of Firefox and Gecko.
> >
> > -Ted


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Michael Layzell
I'm certain it's possible to get a Windows build working. The problems are:

a) We would need to modify the client to understand cl-style flags (I don't
think it does right now)
b) We would need to create the environment tarball
c) We would need to make sure everything runs on windows

None of those are insurmountable problems, but this has been a small side
project which hasn't taken too much of our time. The work to get MSVC
working is much more substantial than the work to get macOS and linux
working.

Getting it such that linux distributes to darwin machines, and darwin
distributes to darwin machines is much easier. It wasn't done by us because
distributing jobs to people's laptops seems kinda silly, especially because
they may have a wifi connection, and as far as I know, basically every mac
in this office is a macbook.

The darwin machines simply need to add an `icecc` user, to run the build
jobs in, and then darwin-compatible toolchains need to be distributed to
all building machines.

On Mon, Jul 4, 2016 at 7:26 PM, Xidorn Quan <m...@upsuper.org> wrote:

> I hope it could support MSVC one day as well, and support distributing any
> job to macOS machines as well.
>
> In my case, I use Windows as my main development environment, and I have
> a personally powerful enough MacBook Pro. (Actually I additionally have
> a retired MBP which should still work.) And if it is possible to
> distribute Windows builds to Linux machines, I would probably consider
> purchasing another machine for Linux.
>
> I would expect MSVC to be something not too hard to run with wine. When
> I was in my university, I ran VC6 compiler on Linux to test my homework
> without much effort. I guess the situation shouldn't be much worse with
> VS2015. Creating the environment tarball may need some work, though.
>
> - Xidorn
>
> On Tue, Jul 5, 2016, at 07:36 AM, Benoit Girard wrote:
> > In my case I'm noticing an improvement with my mac distributing jobs to a
> > single Ubuntu machine but not compiling itself (Right now we don't
> > support
> > distributing mac jobs to other mac, primarily because we just want to
> > maintain one homogeneous cluster).
> >
> > On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch
> > <gijskruitbo...@gmail.com>
> > wrote:
> >
> > > On 04/07/2016 22:06, Benoit Girard wrote:
> > >
> > >> So to emphasize, if you compile a lot and only have one or two
> machines
> > >> on your 100mps or 1gbps LAN you'll still see big benefits.
> > >>
> > >
> > > I don't understand how this benefits anyone with just one machine
> (that's
> > > compatible...) - there's no other machines to delegate compile tasks
> to (or
> > > to fetch prebuilt blobs from). Can you clarify? Do you just mean "one
> extra
> > > machine"? Am I misunderstanding how this works?
> > >
> > > ~ Gijs
> > >
> > >
> > >
> > >> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch <
> gijskruitbo...@gmail.com
> > >> >
> > >> wrote:
> > >>
> > >> What about people not lucky enough to (regularly) work in an office,
> > >>> including but not limited to our large number of volunteers? Do we
> intend
> > >>> to set up something public for people to use?
> > >>>
> > >>> ~ Gijs
> > >>>
> > >>>
> > >>> On 04/07/2016 20:09, Michael Layzell wrote:
> > >>>
> > >>> If you saw the platform lightning talk by Jeff and Ehsan in London,
> you
> > >>>> will know that in the Toronto office, we have set up a distributed
> > >>>> compiler
> > >>>> called `icecc`, which allows us to perform a clobber build of
> > >>>> mozilla-central in around 3:45. After some work, we have managed to
> get
> > >>>> it
> > >>>> so that macOS computers can also dispatch cross-compiled jobs to the
> > >>>> network, have streamlined the macOS install process, and have
> refined
> > >>>> the
> > >>>> documentation some more.
> > >>>>
> > >>>> If you are in the Toronto office, and running a macOS or Linux
> machine,
> > >>>> getting started using icecream is as easy as following the
> instructions
> > >>>> on
> > >>>> the wiki:
> > >>>>
> > >>>>
> > >>>>
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
> > >>>>
> > >>>> If you are

Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Michael Layzell
I'm pretty sure he means one extra machine. For example, if you have a
laptop and a desktop, just adding the desktop into the network at home will
still dramatically improve build times (I think).

On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch <gijskruitbo...@gmail.com>
wrote:

> On 04/07/2016 22:06, Benoit Girard wrote:
>
>> So to emphasize, if you compile a lot and only have one or two machines
>> on your 100mps or 1gbps LAN you'll still see big benefits.
>>
>
> I don't understand how this benefits anyone with just one machine (that's
> compatible...) - there's no other machines to delegate compile tasks to (or
> to fetch prebuilt blobs from). Can you clarify? Do you just mean "one extra
> machine"? Am I misunderstanding how this works?
>
> ~ Gijs
>
>
>
>> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch <gijskruitbo...@gmail.com
>> >
>> wrote:
>>
>> What about people not lucky enough to (regularly) work in an office,
>>> including but not limited to our large number of volunteers? Do we intend
>>> to set up something public for people to use?
>>>
>>> ~ Gijs
>>>
>>>
>>> On 04/07/2016 20:09, Michael Layzell wrote:
>>>
>>> If you saw the platform lightning talk by Jeff and Ehsan in London, you
>>>> will know that in the Toronto office, we have set up a distributed
>>>> compiler
>>>> called `icecc`, which allows us to perform a clobber build of
>>>> mozilla-central in around 3:45. After some work, we have managed to get
>>>> it
>>>> so that macOS computers can also dispatch cross-compiled jobs to the
>>>> network, have streamlined the macOS install process, and have refined
>>>> the
>>>> documentation some more.
>>>>
>>>> If you are in the Toronto office, and running a macOS or Linux machine,
>>>> getting started using icecream is as easy as following the instructions
>>>> on
>>>> the wiki:
>>>>
>>>>
>>>> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
>>>>
>>>> If you are in another office, then I suggest that your office starts an
>>>> icecream cluster! Simply choose one linux desktop in the office, run the
>>>> scheduler on it, and put its IP in the Wiki, then everyone can connect
>>>> to
>>>> the network and get fast builds!
>>>>
>>>> If you have questions, myself, BenWa, and jeff are probably the ones to
>>>> talk to.
>>>>
>>>>
>>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Michael Layzell
If you saw the platform lightning talk by Jeff and Ehsan in London, you
will know that in the Toronto office, we have set up a distributed compiler
called `icecc`, which allows us to perform a clobber build of
mozilla-central in around 3:45. After some work, we have managed to get it
so that macOS computers can also dispatch cross-compiled jobs to the
network, have streamlined the macOS install process, and have refined the
documentation some more.

If you are in the Toronto office, and running a macOS or Linux machine,
getting started using icecream is as easy as following the instructions on
the wiki:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream

If you are in another office, then I suggest that your office starts an
icecream cluster! Simply choose one linux desktop in the office, run the
scheduler on it, and put its IP in the Wiki, then everyone can connect to
the network and get fast builds!

If you have questions, myself, BenWa, and jeff are probably the ones to
talk to.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Return-value-optimization when return type is RefPtr

2016-06-16 Thread Michael Layzell
We pass T* as an argument to a function too often for this to be practical.
The best solution is probably to remove the conversion from RefPtr&& to
T*, which is, I believe, what froydnj is planning to do.

This requires ref qualifiers for methods, which isn't supported in MSVC
2013, but is supported in 2015. See
https://bugzilla.mozilla.org/show_bug.cgi?id=1280295
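
To make the idea concrete, here is a rough sketch (ToyRefPtr, Thing and Use
are made-up names, not the real RefPtr) of what deleting the conversion on
rvalues via ref-qualified conversion operators looks like:

template <typename T>
class ToyRefPtr {
 public:
  explicit ToyRefPtr(T* aPtr) : mPtr(aPtr) {}
  // Fine: an lvalue smart pointer keeps the object alive while the raw
  // pointer is in use.
  operator T*() const& { return mPtr; }
  // Deleted: a temporary would hand out a pointer that dies at the end of
  // the full expression.
  operator T*() && = delete;
 private:
  T* mPtr;
};

struct Thing {};
ToyRefPtr<Thing> MakeThing() { static Thing sThing; return ToyRefPtr<Thing>(&sThing); }

void Use(Thing*);

void Example() {
  ToyRefPtr<Thing> keep = MakeThing();
  Use(keep);             // OK: the lvalue conversion is still allowed
  // Use(MakeThing());   // would not compile: the rvalue conversion is deleted
}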

On Thu, Jun 16, 2016 at 12:27 PM, Eric Rescorla  wrote:

> Is it worth reconsidering removing implicit conversion from RefPtr to
> T*?
>
> -Ekr
>
>
> On Thu, Jun 16, 2016 at 9:50 AM, Boris Zbarsky  wrote:
>
> > On 6/16/16 3:15 AM, jww...@mozilla.com wrote:
> >
> >> I think that is the legacy when we don't have move semantics. Returning
> >> RefPtr won't incur ref-counting overhead and is more expressive and
> >> functional.
> >>
> >
> > Except for the footgun described in <
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1280296#c0>, yes?
> >
> > -Boris
> >
> >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Static analysis for "use-after-move"?

2016-04-29 Thread Michael Layzell
I have already written a basic analysis for use-after-move semantics which
just hasn't been reviewed yet. I'm not sure if it's what you'd be looking
for (take a look at the tests).

https://bugzilla.mozilla.org/show_bug.cgi?id=1186706
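
As a rough illustration (not taken from the patch's tests; Consume and
Example are made-up names), the kind of bug the checker is meant to flag
looks like this:

#include "mozilla/Move.h"
#include "mozilla/UniquePtr.h"

void Consume(mozilla::UniquePtr<int> aPtr) { /* takes ownership */ }

void Example() {
  mozilla::UniquePtr<int> p(new int(42));
  Consume(mozilla::Move(p));
  // Use after move: p is null here, so this is exactly the sort of
  // statement the analysis should report at compile time.
  *p = 7;
}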

Michael

On Fri, Apr 29, 2016 at 12:14 AM, Jim Blandy  wrote:

> On Thu, Apr 28, 2016 at 8:22 PM,  wrote:
>
> > Jim Blandy於 2016年4月28日星期四 UTC+8下午1時51分15秒寫道:
> > > I don't really think it's a good example. TakeMediaIfKnown is
> accepting a
> > > UniquePtr as an inout parameter: it uses, and may modify,
> its
> > > value. It should take UniquePtr &.
> > IIUC, UniquePtr can't be an inout parameter for its unique semantics
> > which owns sole ownership of an object. The caller won't be able to use
> the
> > object in a meaningful way after the function returns.
> >
> >
> >
> I'm not sure I understand. Maybe it would help if we had a more concrete
> example to talk about:
>
> $ cat unique-inout.cc
> #include <stdio.h>
> #include "mozilla/UniquePtr.h"
>
> using mozilla::UniquePtr;
>
> struct MediaFile {
>   const char *name;
>   MediaFile(const char *name) : name(name) { printf("constructing %s\n",
> name); }
>   ~MediaFile() { printf("destructing %s\n", name); }
> };
>
> int foo(UniquePtr<MediaFile> &arg, bool pleaseSwap)
> {
>   if (pleaseSwap) {
>     UniquePtr<MediaFile> ptr = Move(arg);
>     arg = UniquePtr<MediaFile>(new MediaFile("foo's"));
>   }
> }
>
> int main(int argc, char **argv) {
>   UniquePtr<MediaFile> first(new MediaFile("first"));
>   printf("before first call\n");
>   foo(first, false);
>   printf("after first call\n");
>
>   UniquePtr<MediaFile> second(new MediaFile("second"));
>   printf("before second call\n");
>   foo(second, true);
>   printf("after second call\n");
>
> }
> $ ln -sf /home/jimb/moz/dbg/mfbt mozilla
> $ g++ -std=c++11 -I . unique-inout.cc -o unique-inout
> $ ./unique-inout
> constructing first
> before first call
> after first call
> constructing second
> before second call
> constructing foo's
> destructing second
> after second call
> destructing foo's
> destructing first
> $
>
> The first MediaFile's destructor doesn't run until the end of main. The
> second MediaFile's destructor runs during the second call to foo, and then
> foo's MediaFile's destructor runs at the end of main.
>
> That's what I meant by a function taking a UniquePtr as an inout parameter.
> It seemed to me like the function Gerald imagined should be written as in
> my code above, rather than passing Move(...) as the argument. Although the
> language doesn't enforce it, I think Move should be reserved for
> unconditional transfers of ownership.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Now measuring Firefox size per-commit. What else should we be tracking?

2015-11-04 Thread Michael Layzell
I'd love it if we could get better stats for build times. Not only is it
handy for making sure they don't grow out of control, but we can also use
it to make sure that our static analysis doesn't add an unacceptable level
of overhead on compilation.

On Wed, Nov 4, 2015 at 12:10 PM, Andreas Tolfsen  wrote:

> On 4 November 2015 at 15:55, William Lachance 
> wrote:
> > Is there anything we could be tracking as part of our build or test jobs
> > that we should be? Build times are one thing that immediately comes to
> mind.
>
> Measuring build time would almost certainly be of big value, because
> it has a tendency to slip from time to time.
>
> When we start measuring this we should make sure to store the metrics
> for different compiler/linker versions and options separately so we
> can compare slowdown/speedups for different toolchains.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ feature proposal: specialize conversions for type of usage (local, member, parameter, etc.)

2015-10-12 Thread Michael Layzell
This would probably be implemented for us with the static analysis, and an
attribute MOZ_PARAMETER_ONLY_RESULT. I don't know how hard that would be to
implement, but it looks a lot like the work I'm doing in
https://bugzilla.mozilla.org/show_bug.cgi?id=1191540, so I imagine that
once I figure out how to get that one to work nicely (right now it's
misbehaving due to templates being annoying), this one will fall out pretty
easily.

IIRC another problem you were trying to solve was that casting an
nsCOMPtr which is a reference to a member to a T* would be illegal,
while doing so on a local variable would be legal. Again, I think that could be done by
a MOZ_STACK_THIS attribute or similar, which would reject calls unless the
implicit this parameter was an l-value reference to an automatic variable
on the stack.
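
Purely as a hypothetical sketch (MOZ_STACK_THIS does not exist, and
FakeCOMPtr, Widget and Holder are made-up names), the calls such an
annotation would accept or reject might look like this:

#define MOZ_STACK_THIS /* placeholder for the proposed annotation */

template <typename T>
struct FakeCOMPtr {
  T* mRawPtr = nullptr;
  // Proposed rule: only legal when *this is an lvalue naming a stack variable.
  MOZ_STACK_THIS operator T*() const { return mRawPtr; }
};

struct Widget {};
struct Holder { FakeCOMPtr<Widget> mMember; };

void Use(Widget*);

void Example(Holder& aHolder) {
  FakeCOMPtr<Widget> local;
  Use(local);            // OK: *this is a local on the stack
  Use(aHolder.mMember);  // would be rejected: *this refers to a member
}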

On Sun, Oct 11, 2015 at 7:09 AM, Aryeh Gregor  wrote:

> In bug 1193762, there's been work on eliminating the implicit
> conversion from nsCOMPtr<T> to T*, at least for rvalues, to avoid the
> safety problem discussed there.  The problem is that then you can't
> pass an nsCOMPtr<T>&& to a function that wants a T*, even though this
> is perfectly safe: the lifetime of the temporary is guaranteed to
> outlast the function call.  The only solution hitherto devised other
> than requiring lots of .get() is to invent a new type for refcounted
> function parameters, which is awkward.
>
> A new language feature could be used to solve this: allow conversion
> operators to behave differently based on how the variable is declared.
> For instance, it might convert differently if the source or
> destination is a local variable, global/static variable, member
> variable, or function parameter.  This would allow our problem to be
> easily solved by defining something in nsCOMPtr like:
>
>   operator T* [[parameter]]()&&;
>
> while leaving the operator deleted for non-parameters.
>
> If this can be declared on any method, or perhaps on the class itself,
> it could also be used to enforce nsCOMPtr not being used as a global
> or static variable.  Are there other places where this would be
> useful?  I don't know if this makes sense to propose as a language
> feature, but I thought it would be worth bringing up in case anyone
> has more compelling use-cases.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


MOZ_CRASH-invoked crashes are now annotated

2015-10-08 Thread Michael Layzell
Bugs 1183355 and 1209958 have now landed on inbound, which means that
crashes caused by MOZ_CRASH in release builds will now be annotated, and
the annotation reasons should be visible on crash-stats. Currently the
crash will be annotated with any string literals passed to the MOZ_CRASH
macro, so MOZ_CRASH("Oh no!") will be annotated with MozCrashReason =
"MOZ_CRASH(Oh no!)".

MOZ_RELEASE_ASSERT isn't annotated yet, that will be coming with bug
1211979.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_RAII has landed on inbound

2015-09-15 Thread Michael Layzell
I've updated it with some information about MOZ_RAII.
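
For anyone looking for a quick example, here is a minimal sketch
(AutoExample is a made-up name) of annotating an RAII guard; with the
analysis enabled, non-automatic allocations of it fail to compile:

#include "mozilla/Attributes.h"

class MOZ_RAII AutoExample {
 public:
  AutoExample() { /* acquire something */ }
  ~AutoExample() { /* release it */ }
};

void Use() {
  AutoExample guard;                        // OK: automatic variable
  // AutoExample* bad = new AutoExample();  // rejected by the analysis
}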

On Mon, Sep 14, 2015 at 3:02 PM, Aaron Klotz <akl...@mozilla.com> wrote:

> Can you please update
> https://developer.mozilla.org/en-US/docs/Using_RAII_classes_in_Mozilla
> with this information?
>
> Thanks,
>
> Aaron
>
>
> On 9/12/2015 3:06 PM, Michael Layzell wrote:
>
>> Hey everyone,
>>
>> Bug 1201190 (https://bugzilla.mozilla.org/show_bug.cgi?id=1201190) just
>> landed on inbound, which means that we now have access to the new
>> annotation MOZ_RAII. This is a static-analysis annotation, intended to be
>> placed on RAII guards. It will cause the static analysis to cause
>> compilation to fail if the class is allocated anywhere other than in an
>> automatic variable - this includes temporaries, unlike MOZ_STACK_CLASS,
>> which also allows allocating the type in a temporary.
>>
>> This new analysis fills the role of the much more verbose MOZ_GUARD_OBJECT
>> annotations, which perform runtime analysis to prevent temporary
>> allocations. Unfortunately, as we currently only run static analysis on
>> Linux and Mac OS X, you should still use MOZ_GUARD_OBJECT (In addition to
>> MOZ_RAII - which is more likely to catch errors, as it checks at build
>> time
>> and is more thorough) if the class might be used from windows-specific
>> code.
>>
>> To mark a class as MOZ_RAII, simply `#include "mfbt/Attributes.h"`, and
>> then change the class declaration to `class MOZ_RAII FooGuard`.
>>
>> If you're adding any new RAII guards, please use MOZ_RAII! Thanks :D
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MozPromises are now in XPCOM

2015-08-19 Thread Michael Layzell
We have a static analysis for this right now: 
https://dxr.mozilla.org/mozilla-central/source/build/clang-plugin/tests/TestNoRefcountedInsideLambdas.cpp
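
Roughly, the pattern it is after looks like this (a self-contained sketch,
not taken from the tests; Foo and gPending are made-up stand-ins for a
refcounted class and a dispatched task):

#include <cstdio>
#include <functional>

struct Foo {
  int mRefCnt = 1;
  void AddRef() { ++mRefCnt; }
  void Release() { if (--mRefCnt == 0) delete this; }
  void DoThing() { std::printf("doing thing\n"); }
};

std::function<void()> gPending;  // pretend this runs later, on another thread

void ScheduleBad(Foo* aRawFoo) {
  // Capturing a raw pointer to a refcounted object is what gets flagged: if
  // the last reference is released before gPending runs, this lambda touches
  // freed memory. The fix is to capture a strong reference by value instead.
  gPending = [aRawFoo]() { aRawFoo->DoThing(); };
}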


And there is potential for more static analyses if we decide that we 
need them.


We also have bug 1157788 if we decide at some point that we want to 
forbid default captures in lambdas and require explicitly listing the 
capture variables.


- Michael

On 2015-08-19 8:35 AM, smaug wrote:

Hi bholley,


looks great, but a question

The example
mProducer->RequestFoo()
 ->Then(mThread, __func__,
[...] (ResolveType aVal) { ... },
[...] (RejectType aVal) { ... });
uses C++ lambdas. Do we have some static analysis or such in place to
protect that
lambdas don't refer to raw pointer variables in the scope
(especially raw pointer variables pointing to ref counted objects)?
Or does MozPromise have similar setup to bug 1153295 or what?






-Olli


On 08/19/2015 06:17 AM, Bobby Holley wrote:

I gave a lightning talk at Whistler about MozPromise and a few other new
tools to facilitate asynchronous and parallel programming in Gecko.
There
was significant interest, and so I spent some time over the past few
weeks
untangling them from dom/media and hoisting them into xpcom/.

Bug 1188976 has now landed on mozilla-central, MozPromise (along with
TaskQueue, AbstractThread, SharedThreadPool, and StateMirroring) can
now be
used everywhere in Gecko.

I also just published a blog post describing why MozPromises are
great and
how they work: http://bholley.net/blog/2015/mozpromise.html

Feedback is welcome. These tools are intended to allow developers to
easily
and safely run code on off-main-thread thread pools, which is
something we
urgently need to do more of in Gecko. Go forth and write more
parallel code!

bholley



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to Implement: Selection Events

2015-08-18 Thread Michael Layzell
Summary: We currently require webpages to poll the current selection
when they want to be notified of changes to the user's selection. This
patch adds two events, selectstart and selectionchange, which allow the
website to detect when the selection is changed. selectstart is fired
when the user starts selecting, and selectionchange is fired when the
selection changes for any reason.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=571294

Link to standard: http://w3c.github.io/selection-api/#user-interactions

Platform coverage: All platforms.

Target release: Firefox 43.

Preference behind which this will be implemented: 
dom.select_events.enabled


DevTools bug: N/A

Do other browser engines implement this: IE, Chrome, and Safari all 
implement this API.


Security & Privacy Concerns: This API adds a new mechanism for canceling
user selections, which could be abused. However, canceling selections was
already possible before this API.

Web designer / developer use-cases: This is a useful tool for websites 
which wish to be notified
when the user's selection changes, as currently websites have to poll 
the selection in Firefox.


Michael
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to Ship: Updated third-party iframe storage behavior

2015-08-18 Thread Michael Layzell
Summary: Currently, there are inconsistent rules about the availability 
of persistent storage in third-party iframes across different types of 
storage (such as caches, IndexedDB, localstorage, sessionstorage, and 
cookies). We are looking to unify these behaviors into a consistent set 
of rules for when persistent storage should be available. We have 
modeled this after our cookie rules, and now use the cookie behavior 
preference to control third party access to these forms of persistent 
storage. This means that IndexedDB (which was previously unconditionally 
disabled in 3rd-party iframes) is now available in 3rd party iframes 
when the "accept third-party cookies" preference is set to "Always". As 
our current definition of accepting third-party cookies from "Only 
Visited" makes no sense for non-cookie storage, we currently treat this 
preference for these forms of storage as though the preference was "Never".


Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1184973

Link to standard: N/A.

Platform coverage: All platforms.

Target release: Firefox 43.

Preference behind which this will be implemented: None, although the 
preference
network.cookie.cookieBehavior will be used to guide the behavior of 
storage in third-party iFrames.


DevTools bug: N/A.

Do other browser engines implement this: Based on my quick testing: 
Chrome uses its third party preference to control access to 
localStorage and sessionStorage, but not IndexedDB or caches. Safari 
appears to use its preference to control IndexedDB, but not 
sessionStorage or localStorage. IE appears to only use its 3rd party 
preference for cookies. All other browsers allow IndexedDB in 3rd party 
iframes with default settings.


Security & Privacy Concerns: This changes how websites can store data on 
the user's machine.


Web designer / developer use-cases: Previously, we had made IndexedDB 
unavailable in 3rd-party iframes. Web developers will now be able to use 
IndexedDB in 3rd party iframes when the user has the "accept cookies" 
preference set to "Always".


Michael
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::TemporaryRef is gone; please use already_AddRefed

2015-07-08 Thread Michael Layzell
We're already working on a static analysis in 
https://bugzilla.mozilla.org/show_bug.cgi?id=1180993 which will prevent 
code like that from passing try.


Hopefully that will help with your concerns.
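
For reference, a rough sketch of the leak pattern Seth describes below
(assuming Foo is some refcounted class, and with the usual Gecko smart
pointer headers in scope) looks like this:

class Foo {
 public:
  NS_INLINE_DECL_REFCOUNTING(Foo)
 private:
  ~Foo() {}
};

already_AddRefed<Foo> MakeFoo() {
  nsRefPtr<Foo> foo = new Foo();
  return foo.forget();
}

void Caller() {
  MakeFoo();                       // leaks: nothing adopts the reference
  nsRefPtr<Foo> kept = MakeFoo();  // fine: the nsRefPtr takes ownership
}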

On 2015-07-07 12:59 PM, Seth Fowler wrote:

I brought this up on IRC, but I think this is worth taking to the list:

Replacing TemporaryRef with already_AddRefed has already caused at least one 
leak because already_AddRefed does not call Release() on the pointer it holds 
if nothing takes ownership of that pointer before its destructor runs. 
TemporaryRef did do that, and this seems like a strictly safer approach.

My question is: why doesn’t already_AddRefed call Release() in its destructor? 
The fact that it doesn’t seems to me to be a profoundly unintuitive footgun.

I don’t think we can trust reviewers to catch this, either. The fact that Foo() 
is declared as:

already_AddRefed<Bar> Foo();

won’t necessarily be obvious from a call site, where the reviewer will just see:

Foo();

That call will leak, and if we don’t happen to hit that code path in our tests, 
we may not find out about it until after shipping it.

I’d prefer that already_AddRefed just call Release(), but if there are 
performance arguments against that, then we should be checking for this pattern 
with static analysis. I don’t think the situation as it stands should be 
allowed to remain for long.

- Seth


On Jun 30, 2015, at 12:02 PM, Nathan Froyd nfr...@mozilla.com wrote:

Bug 1161627 has landed on inbound, which converts all uses of
mozilla::TemporaryRef to already_AddRefed and removes TemporaryRef.
(already_AddRefed moved to MFBT several months ago, in case you were
wondering about the spreading of XPCOM concepts.)  TemporaryRef was added
for easier porting of code from other engines, but as it has not been used
much for that purpose and it led to somewhat less efficient code in places,
it was deemed a failed experiment.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::TemporaryRef is gone; please use already_AddRefed

2015-07-07 Thread Michael Layzell

On 2015-07-07 6:37 AM, Aryeh Gregor wrote:

Did you check whether this actually occurs in an optimized build?  C++
allows temporaries to be optimized away under some circumstances,
e.g., when returning a local variable.  It would make a lot of sense
to me if it allowed the temporary created by a ternary operator to be
optimized away.
No, I never checked if it happens on an optimized build, but since C++ 
follows the as-if principle, the code has to execute as if 
those temporaries had been created. Unfortunately, AddRef and Release 
are virtual, which, I'm pretty sure, means that the compiler can't 
optimize them out as they may have arbitrary side effects :(.


So the temporaries are probably eliminated, but the calls to AddRef and 
Release are probably not. I could be wrong on this though.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fwd: mozilla::TemporaryRef is gone; please use already_AddRefed

2015-07-06 Thread Michael Layzell
I suppose that the general problem is that we are using an nsRefPtr 
lvalue in one side of the branch, and an nsRefPtr xvalue on the other 
(as the nullptr is being cast to an nsRefPtr, which has to live in a 
temporary making it an xvalue). This is reasonable in a situation where 
we actually want an nsRefPtr, as when the value is used somewhere, it 
will be moved, and we won't produce any extra addref/releases. The 
problem, instead, comes from the operator T*() &&, which allows us to 
extract one of these references from an xvalue. I think that every use 
of this conversion is likely to be a source of an unnecessary (or 
dangerous) addref/release pair.


If we remove it, I don't think that the static analysis will be 
necessary, as the pattern of creating an unnecessary temporary, just to 
extract a value from it, should be much more visibly wrong.


  Foo* x = true ? nullptr : bar;
would look like
Foo* x = (true ? nullptr : bar).get();

which is much more obviously inefficient.


On 2015-07-06 8:49 AM, Neil wrote:

Michael Layzell wrote:

In summary, the nsRefPtr is copied into a temporary in its side of 
the conditional. The nullptr is cast to a struct Foo*, which is 
constructed into a nsRefPtr, which is bound to a temporary, and then 
moved around a bit between temporaries. The resulting temporary 
xvalue then has the raw pointer extracted from it (which uses the 
rvalue-reference cast call - the part which is causing the problem), 
which is stored locally after the temporaries are destroyed. The 
temporaries which are created in the conditional are then destroyed, 
which causes a Release(), but that is OK because there was an AddRef 
due to the extra copy in the nsRefPtr branch.


So the ternary actually causes an unnecessary AddRef/Release pair, neat.

Neat as in "Can we please have a static analysis to detect this 
please?"




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fwd: mozilla::TemporaryRef is gone; please use already_AddRefed

2015-07-03 Thread Michael Layzell
With regard to what #2 is doing, I threw the following into a file with
nsRefPtr, and got clang to dump the AST:

struct Foo {void AddRef() {} void Release() {}};
nsRefPtr<Foo> bar;
void foo() {
  Foo* x = true ? nullptr : bar;
}

`-FunctionDecl 0x103805750 <line:943:1, line:945:1> line:943:6 foo 'void (void)'
  `-CompoundStmt 0x103807708 <col:12, line:945:1>
    `-DeclStmt 0x1038076f0 <line:944:3, col:32>
      `-VarDecl 0x103805800 <col:3, col:29> col:8 x 'struct Foo *' cinit
        `-ExprWithCleanups 0x1038076d8 <col:12, col:29> 'struct Foo *'
          `-ImplicitCastExpr 0x1038076c0 <col:12, col:29> 'struct Foo *' <UserDefinedConversion>
            `-CXXMemberCallExpr 0x103807698 <col:12, col:29> 'struct Foo *'
              `-MemberExpr 0x103807668 <col:12, col:29> '<bound member function type>' .operator Foo * 0x1038048a0
                `-ImplicitCastExpr 0x103807650 <col:12, col:29> 'const class nsRefPtr<struct Foo>' <NoOp>
                  `-ConditionalOperator 0x103807618 <col:12, col:29> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>'
                    |-CXXBoolLiteralExpr 0x103805858 <col:12> '_Bool' true
                    |-CXXBindTemporaryExpr 0x1038074a8 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' (CXXTemporary 0x1038074a0)
                    | `-CXXConstructExpr 0x103807468 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' 'void (nsRefPtr<struct Foo> &&)' elidable
                    |   `-MaterializeTemporaryExpr 0x103807450 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' xvalue
                    |     `-CXXBindTemporaryExpr 0x103807358 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' (CXXTemporary 0x103807350)
                    |       `-CXXConstructExpr 0x103807318 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' 'void (nsRefPtr<struct Foo> &&)' elidable
                    |         `-MaterializeTemporaryExpr 0x103807300 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' xvalue
                    |           `-CXXBindTemporaryExpr 0x103806f38 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' (CXXTemporary 0x103806f30)
                    |             `-ImplicitCastExpr 0x103806f10 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' <ConstructorConversion>
                    |               `-CXXConstructExpr 0x103806ed8 <col:19> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' 'void (struct Foo *)'
                    |                 `-ImplicitCastExpr 0x103806ec0 <col:19> 'struct Foo *' <NullToPointer>
                    |                   `-CXXNullPtrLiteralExpr 0x103805870 <col:19> 'nullptr_t'
                    `-CXXBindTemporaryExpr 0x1038075f8 <col:29> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' (CXXTemporary 0x1038075f0)
                      `-CXXConstructExpr 0x1038075b8 <col:29> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' 'void (const nsRefPtr<struct Foo> &)'
                        `-ImplicitCastExpr 0x1038075a0 <col:29> 'const nsRefPtr<struct Foo>':'const class nsRefPtr<struct Foo>' lvalue <NoOp>
                          `-DeclRefExpr 0x103805888 <col:29> 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>' lvalue Var 0x103084da0 'bar' 'nsRefPtr<struct Foo>':'class nsRefPtr<struct Foo>'

In summary, the nsRefPtr is copied into a temporary in its side of the
conditional. The nullptr is cast to a struct Foo*, which is constructed
into an nsRefPtr, which is bound to a temporary, and then moved around a bit
between temporaries. The resulting temporary xvalue then has the raw
pointer extracted from it (which uses the rvalue-reference cast call - the
part which is causing the problem), which is stored locally after the
temporaries are destroyed. The temporaries which are created in the
conditional are then destroyed, which causes a Release(), but that is OK
because there was an AddRef due to the extra copy in the nsRefPtr branch.

So the ternary actually causes an unnecessary AddRef/Release pair, neat.
The problem here appears to be that when deciding the type of the ternary,
C++ chooses nsRefPtr<Foo>, rather than Foo*. Adding the get() makes C++
choose the correct type for the ternary, and avoids the cast of the rvalue
reference.


On Wed, Jul 1, 2015 at 3:48 PM, Nathan Froyd nfr...@mozilla.com wrote:

 On Wed, Jul 1, 2015 at 7:39 AM, Aryeh Gregor a...@aryeh.name wrote:

  On Wed, Jul 1, 2015 at 6:24 AM, Joshua Cranmer
 pidgeo...@gmail.com
  wrote:
   You'd get the same benefit, I think, by making operator T*() && =
  delete;, syntax which is accepted on gcc 4.8.1+, clang, and MSVC 2015
 IIRC.
 
  I once tried this and found it had problematic side effects that made
  deploying it difficult in practice.  I think one involved the ternary
  operator.  Unfortunately, I can't remember exactly what they were.  :(
   Anyone who's interested can try for themselves by just adding that
  line to nsCOMPtr.h, recompiling, and seeing what breaks.
 

 I tried this, fixed a few compilation errors, then decided this wasn't
 worth it just yet and threw my work into a bug.  Comments 

Re: mozilla::TemporaryRef is gone; please use already_AddRefed

2015-07-03 Thread Michael Layzell
On Fri, Jul 3, 2015 at 8:56 AM, Neil n...@parkwaycc.co.uk wrote:

 Sure, but that won't stop someone from writing Arg<Foo> foo =
 ReturnFoo2(); will it?

 Something like that is fairly trivial for a static analysis system like
our clang plugin to catch, if we want to create a type like that.
Restricting certain types to only exist in temporaries and parameter
declarations would not be too hard.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to migrate the permissions database to use origins instead of host names

2015-07-01 Thread Michael Layzell
The patches I am working on already use Bobby Holley's OriginAttributes, in
fact we use the origin attribute on the nsIPrincipal, and only expose an
nsIPrincipal from the API.

Internally, we use the origin attribute for serialization, but to external
consumers of the API, all that is available are APIs for adding and
removing permissions VIA nsIPrincipal, and some convenience methods which
exist for legacy reasons which allow you to add/remove permissions via
nsIURIs, creating NoAppCodebasePrincipals for them. You are right, that if
we decide that double keying is something we want, that it would be
possible to add it to the permission manager by simply modifying
nsIPrincipal's OriginAttributes.

In terms of a mechanism for DENY being treated differently, it would be
possible (albeit very hacky) to always insert DENY entries for URIs with a
host property at http://HOST, and then, during the lookup, also check
http://HOST, and if it is DENY, expose a DENY for the other origin. I don't
really like this solution, as it creates lots of edge cases and other
complications which I'm not sure if we want to expose to API consumers, as
well as making the semantics of permissions diverge from nsIPrincipals,
(the coherence of the new semantics with nsIPrincipal is something I quite
like).

On Tue, Jun 30, 2015 at 8:53 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jun 30, 2015 at 3:55 PM, Martin Thomson m...@mozilla.com wrote:
  I wonder, has the subject of double-keying been raised in this
  context?  It comes up frequently in this context. And when I say
  double-keying, I mean forming a key from the tuple of the requesting
  principal and the top level browsing context principal (though origin
  may suffice).
 
  If there are disruptive changes afoot, then segregating based on what
  is shown to the user might be sensible.

 Bobby Holley has added infrastructure on nsIPrincipal called
 OriginAttributes which is intended to be an extension hook to allow
 things like double keying. As long as we use the 'origin' attribute on
 nsIPrincipal, and make sure that all callers pass in an nsIPrincipal
 rather than an nsIURI, then we should be able to relatively easy add
 double keying in the future.

 / Jonas

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Revisiting modelines in source files

2015-06-17 Thread Michael Layzell



On 2015-06-17 1:04 PM, Andrew McCreight wrote:

As Boris said, for our particular emacs modeline there is no prompt.


Actually, in some JS files I'm getting a prompt when opening them - 
js-indent-level isn't considered a safe variable by emacs. (although 
it's pretty easy to mark them as safe, and not have emacs prompt you 
again in the future).

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform