Re: Adopting the black Python code style

2020-10-20 Thread Jeff Gilbert
Well we generally don't seek consensus anymore for these sorts of
things, it seems, but it's reassuring that I'm not alone.

On Tue, Oct 20, 2020 at 1:17 AM James Graham  wrote:
>
> On 19/10/2020 22:01, Jeff Gilbert wrote:
> > I'm disappointed by that.
>
> FWIW last time I looked at black, I found that the compromises it made
> to be fully automatic and with minimal configuration meant that it was
> liable to produce ugly or difficult to read code in some situations.
>
> I understand that we've decided that people will get used to reading any
> code style over time, and therefore eliminating formatting concerns from
> the code writing process is a net win for productivity. So I'm not
> taking a position in opposition to this proposal, but it is not
> something I would have advocated personally.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: Adopting the black Python code style

2020-10-19 Thread Jeff Gilbert
I'm disappointed by that.

On Mon, Oct 19, 2020 at 2:00 PM Ricky Stewart  wrote:
>
> On Monday, October 19, 2020 at 8:53:59 AM UTC-5, Andrew Halberstadt wrote:
> > No, black now has a `--skip-string-normalization` flag, which I would be
> > all for using in our code base. Not sure if that was the plan here or not.
> >
> > -Andrew
> >
> > P.S. It took a great deal of convincing from the community to get this flag
> > added, as the maintainers precisely wanted to prevent conversations like
> > this current one from happening :p.
> > On Thu, Oct 15, 2020 at 5:23 PM Mike Hommey  wrote:
> >
> > > Is black still opinionated about string types and insisting to use double
> > > quotes, when we mostly settled on single quotes?
> > >
> > > On Mon, Oct 12, 2020 at 10:00:56AM -0700, Ricky Stewart wrote:
> > > > Hello everyone,
> > > >
> > > > If you don't write Python code in mozilla-central, you can stop reading
> > > > now.
> > > >
> > > > On October 19, 2020 we will be officially adopting the black Python
> > > > style for all our Python code in mozilla-central.
> > > >
> > > > black (https://black.readthedocs.io/en/stable/) is an opinionated,
> > > > fast, and correct auto-formatter for Python. It is an increasingly popular
> > > > autoformatter which might be considered the de facto standard for Python
> > > > code (like clang-format and jslint are for C++ and JS). It is already used
> > > > by several Mozilla projects, including Release Engineering, Lando, and
> > > > moz-phab.
> > > >
> > > > black makes it easy for us to reliably format all our Python code in a
> > > > consistent way, making the codebase easier to read on the whole and
> > > > allowing us to spend more time in code review discussing substantive
> > > > issues over trivial formatting matters.
> > > >
> > > > This policy change will affect all Python code in-tree, including
> > > > sandboxed Python code used by the build system (.configure, .build, and
> > > > .mozbuild files).
> > > >
> > > > As part of this policy change, we plan on doing a one-time auto-reformat
> > > > on October 19 of all Python code in the entire repository. In addition,
> > > > mach lint (
> > > > https://firefox-source-docs.mozilla.org/code-quality/lint/linters/black.html)
> > > > and reviewbot will be updated to print warnings for Python source files
> > > > that violate the black style. Just like with C/C++ or Rust, we won't
> > > > back out offending changes but instead will do regular refreshes of the
> > > > tree.
> > > >
> > > > If there are any questions, please let me know!
> > > >
> > > > Ricky
> > >
>
> --skip-string-normalization does exist, but the plan is not to use it. While 
> there is a cost associated with reformatting many (but not all) of the 
> strings in our Python code to be consistent with the black style, we don't 
> think that that cost outweighs the benefit of doing so.


Re: Please don't use locale-dependent C standard library functions (was: Re: Please don't use functions from ctype.h and strings.h)

2020-06-12 Thread Jeff Gilbert
It would be great to have CI linting for these!

On Fri, Jun 12, 2020 at 2:19 AM Henri Sivonen  wrote:
>
> This is an occasional re-reminder that anything in the C standard
> library that is locale-sensitive is fundamentally broken and should
> not be used.
>
> Today's example is strerror(), which returns a string that is meant to
> be rendered to the user, but the string isn't guaranteed to be UTF-8.
>
> On Mon, Aug 27, 2018 at 3:04 PM Henri Sivonen  wrote:
> >
> > Please don't use the functions from ctype.h and strings.h.
> >
> > See:
> > https://daniel.haxx.se/blog/2018/01/30/isalnum-is-not-my-friend/
> > https://daniel.haxx.se/blog/2008/10/15/strcasecmp-in-turkish/
> > https://stackoverflow.com/questions/2898228/can-isdigit-legitimately-be-locale-dependent-in-c
> >
> > In addition to these being locale-sensitive, the functions from
> > ctype.h are defined to take (signed) int with the value space of
> > *unsigned* char or EOF and other argument values are Undefined
> > Behavior. Therefore, on platforms where char is signed, passing a char
> > sign-extends to int and invokes UB if the most-significant bit of the
> > char was set! Bug filed 15 years ago!
> > https://bugzilla.mozilla.org/show_bug.cgi?id=216952 (I'm not aware of
> > implementations doing anything surprising with this UB but there
> > exists precedent for *compiler* writers looking at the standard
> > *library* UB language and taking calls into standard library functions
> > as optimization-guiding assertions about the values of their
> > arguments, so better not risk it.)
> >
> > For isfoo(), please use mozilla::IsAsciiFoo() from mozilla/TextUtils.h.
> >
> > For tolower() and toupper(), please use ToLowerCaseASCII() and
> > ToUpperCaseASCII() from nsUnicharUtils.h
> >
> > For strcasecmp() and strncasecmp(), please use their nsCRT::-prefixed
> > versions from nsCRT.h.
> >
> > (Ideally, we should scrub these from vendored C code, too, since being
> > in third-party code doesn't really make the above problems go away.)
> >
> > --
> > Henri Sivonen
> > hsivo...@mozilla.com
>
>
>
> --
> Henri Sivonen
> hsivo...@mozilla.com


Re: To what extent is sccache's distributed compilation usable?

2019-10-25 Thread Jeff Gilbert
I don't know. If nothing else, it looks like it's possible to run macOS in
a VM in some cases, so maybe that's worth someone exploring. Unfortunately,
the upcoming Mac Pros look to be over four times as expensive as my machine.

On Thu, Oct 24, 2019 at 5:55 PM Marcos Caceres  wrote:

> On Friday, October 25, 2019 at 4:14:05 AM UTC+11, Jeff Gilbert wrote:
> > My home Ryzen 3900x builds windows debug clobbers in under 10 minutes.
>
> Whoa, that's awesome!
>
> > Beefy desktops are relatively cheap, and I believe we continue to
> > encourage
> > firefox-builders to request desktop workstations for builds.
>
> Do you know if it's possible to cross-compile on one of those to
> target macOS? I'd like to build on, and for, macOS.
>


Re: To what extent is sccache's distributed compilation usable?

2019-10-24 Thread Jeff Gilbert
My home Ryzen 3900x builds windows debug clobbers in under 10 minutes.
Beefy desktops are relatively cheap, and I believe we continue to encourage
firefox-builders to request desktop workstations for builds.

On Wed, Oct 23, 2019 at 11:20 PM Marcos Caceres 
wrote:

> On Thursday, October 24, 2019 at 6:46:08 AM UTC+11, Christopher Manchester
> wrote:
> > As announced in this week's project call, sccache-dist schedulers have
> > been
> > deployed to mozilla offices, and those with hardware available can enroll
> > servers based on the documentation here
> > <
> > https://firefox-source-docs.mozilla.org/build/buildsystem/sccache-dist.html#steps-for-setting-up-a-server
> >.
>
> Just a reminder to not forget about us remotees! We are dying out here
> with 1+ hour compile times so anything would really help (even 20-20%
> improvement would go a long way). A lot of us have spare old macs, and if
> we can put them to good use with this, that would be amazing.


Re: Using MozPhab without Arcanist

2019-10-23 Thread Jeff Gilbert
Likewise I use `phlay` on git.

On Wed, Oct 23, 2019 at 1:35 PM Jörg Knobloch  wrote:

> On 23 Oct 2019 22:24, Emma Humphries wrote:
> > Will this make it easier for non-staff contributors to get their change
> > sets reviewed and landed?
> >
> > What else should we be doing for that.
> >
> > I have some ideas I'm working on for views in Bugzilla to help with that
> > so
> > that contributors can see what's going on, and where they can take action
> > to help.
>
> Greetings from the Thunderbird sheriff!
>
> Just for the record: I'm using the Mercurial extensions 'hg phabsend'
> and 'hg phabread' as first described in Kris' post entitled "Re: Stopgap
> for Commit Series in Phabricator" on 26th July 2018 (yes, one - eight).
>
> That takes three minutes to set up and will work on any platform, also
> on Windows, where setting up Arcanist is apparently a 12-step
> process[1]. I don't know why these excellent extensions are not
> advertised more widely. Thanks to Ian Moody [:KWan] who pointed them out
> to me.
>
> 'hg phabsend' will complain if the reviewer isn't spelled correctly, I
> haven't tried without reviewer.
>
> Jörg.
>
> [1] https://moz-conduit.readthedocs.io/en/latest/arcanist-windows.html
>


Re: C++ standards proposal for a embedding library

2019-10-22 Thread Jeff Gilbert
Well, that's bleak. Get pumped for a surge in "Download our desktop app for
the full experience"!

Rather than spending our limited resources giving guiding technical
feedback to proposals we fundamentally oppose, I would rather see us
putting effort into satisfying the target user stories with a less
catastrophic proposal.

A bunch of senior engineers have chimed in here, but I would love to know
that this is being surfaced to upper leadership.

Thanks for keeping us all apprised, Botond.

On Tue, Oct 22, 2019 at 1:55 PM Botond Ballo  wrote:

> Hi folks,
>
> I wanted to give an update on the "web_view" C++ standard library proposal.
>
> I have relayed the feedback received on this thread on multiple
> occasions, and our concerns about this proposal as a browser
> implementer have been noted by the committee. However, the proposal
> has been received positively by other participants of the committee,
> including other browser vendors, and is continuing to be developed and
> reviewed. While it's still at an early stage (still in front of the
> Library Evolution Incubator group), it is being actively developed and
> the proposed API is becoming more fleshed out.
>
> Given that, would anyone be interested in reviewing the proposed API
> and providing feedback on its design? I feel like the committee would
> be receptive to constructive technical feedback, and as a group with
> experience in developing embedding APIs, we are in a particularly good
> position to provide such feedback.
>
> The latest draft of the proposal can be found here:
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1108r4.html
>
> Thanks!
> Botond
>
> On Wed, Jul 18, 2018 at 12:45 PM Botond Ballo  wrote:
> >
> > Hi everyone,
> >
> > With the proposal for a standard 2D graphics library now on ice [1],
> > members of the C++ standards committee have been investigating
> > alternative ways of giving C++ programmers a standard way to write
> > graphical and interactive applications, in a way that leverages
> > existing standards and imposes a lower workload on the committee.
> >
> > A recent proposal along these lines is for a standard embedding
> > facility called "web_view", inspired by existing embedding APIs like
> > Android's WebView:
> >
> > http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
> >
> > As we have some experience in the embedding space here at Mozilla, I
> > was wondering if anyone had feedback on this embedding library
> > proposal. This is an early-stage proposal, so high-level feedback on
> > the design and overall approach is likely to be welcome.
> >
> > Thanks!
> > Botond
> >
> > [1]
> > https://botondballo.wordpress.com/2018/06/20/trip-report-c-standards-meeting-in-rapperswil-june-2018/


Re: Coding style: Naming parameters in lambda expressions

2019-09-06 Thread Jeff Gilbert
Notably, aFoo is neither Google nor Rust style. If we didn't already have
aFoo as a style, we certainly wouldn't adopt it today.

On Fri, Sep 6, 2019 at 6:59 AM Botond Ballo  wrote:

> On Fri, Sep 6, 2019 at 9:40 AM Andrew Sutherland
>  wrote:
> > But of course, if this was all being done from inside an editor or a
> > debugger, no matter what tricks searchfox can do, they can't help you
> > elsewhere.
>
> Editors can of course do things of their own to help you on this
> front. For example, Eclipse CDT has semantic highlighting and can
> highlight parameters differently from local variables. The C++
> language server clangd (which is designed to be able to plug into any
> editor) recently acquired this ability as well.
>
> I'm not saying this is a reason to retire the 'aFoo' convention (I'd
> wait until technologies like the above are a bit more ubiquitous
> before doing that), just mentioning the above tools because I've found
> them very useful for editing Mozilla C++ code.
>
> Cheers,
> Botond


Re: Coding style: Naming parameters in lambda expressions

2019-09-05 Thread Jeff Gilbert
I remain against aFoo style. I still think it's low-signal-high-noise, and
doesn't provide value that modern syntax highlighting doesn't provide for
those who find it useful.

I'm absolutely opposed to adding a new prefix. That would be moving even
further down the path of our proprietary code-smell.

On Thu, Sep 5, 2019 at 6:15 AM Simon Giesecke  wrote:

> Hi,
>
> we encountered the question of how to name parameters in lambda
> expressions.
>
> For regular functions, the coding style implies that parameter naming
> should use camelCase with an "a" prefix, and this is also widely done
> this way. The coding style does not say anything specifically
> concerning lambda expressions at the moment. Actual uses differ at
> least between using the a prefix and using no prefix.
>
> Since in most cases, lambda expressions occur within a regular
> function, using the a prefix leads to some ambiguity when reading the
> code, as one cannot immediately tell whether an a-prefixed identifier
> is a parameter to the lambda expression or to the enclosing function
> (which may be captured).
>
> For example, one might have:
>
> bool MyFunction(const nsTArray<Foo>& aFoos, const nsCString& aId) {
>   if (std::any_of(aFoos.begin(), aFoos.end(), [aId](const Foo& aFoo) {
>         return aFoo.Id() == aId;
>       })) {
>     // do something more...
>     return true;
>   }
>   return false;
> }
>
> This is a simple case, where this might not be particularly
> problematic, but I find it quite confusing to use aFoo as a lambda
> parameter name. Obviously, when not using a prefix, similar
> ambiguities with local variable names in the enclosing function may
> arise. For some subjective reason, I find this less confusing, but
> this may be different for others. Using a different prefix, e.g. l,
> leading to calling the lambda parameter lFoo, would resolve this
> ambiguity, but one might feel there are too many different prefixes.
>
> While I have some personal preference against the a prefix here, I
> think any of these options were acceptable. Maybe this has already
> been discussed and concluded before, but not put into the coding style
> document?
>
> Simon


Re: Upcoming C++ standards meeting in Cologne

2019-07-30 Thread Jeff Gilbert
I want to underline how insane this is:
"...the groups which looked at [the web_view] proposal [...] largely viewed
it favourably, a promising way of allowing C++ applications to do things like
graphical output without having to standardize a graphics API ourselves, as
previously attempted."

I feel like I'm taking crazy pills. I would never have guessed that any
committee would have thought the failure of the graphics API proposal was
that it didn't go far enough. I don't think there could be a brighter
possible neon sign that library integration and ecosystem ergonomics are so
horrendous that one of the proposed remedies for "it's hard to integrate
[skia,cairo,opengl] with my project" is "let's effectively include an
existing operating system in the standard library".

I should also mention that as a non-majority browser engine developer, this
is an existential threat to Mozilla, and in turn, the web.

What can I do to help keep things on the tracks here?

On Sat, Jul 27, 2019 at 10:42 AM Botond Ballo  wrote:

> Hi folks!
>
> If you're interested in some more details about what happened at last
> week's meeting, my blog post about it is now available (also on
> Planet):
>
>
> https://botondballo.wordpress.com/2019/07/26/trip-report-c-standards-meeting-in-cologne-july-2019/
>
> Cheers,
> Botond
>
> On Sat, Jul 13, 2019 at 12:25 AM Botond Ballo  wrote:
> >
> > Hi everyone!
> >
> > The next meeting of the C++ Standards Committee (WG21) will be July
> > 15-20 in Cologne, Germany.
> >
> > (Apologies for not sending this announcement sooner!)
> >
> > This is a particularly important meeting because the committee aims to
> > publish the C++20 Committee Draft, a feature-complete draft of C++20
> > that will be sent out for balloting and comment from national
> > standards bodies, at the end of this meeting. C++20 is probably going
> > to be the language's largest release since C++11, containing an
> > extensive lineup of significant new features (Concepts, Modules,
> > Coroutines, Contracts, Ranges, and default comparisons (<=>), among
> > others). Modules has just merged into the working draft at the last
> > meeting, and continues to be under active discussion due to tooling
> > impacts. Contracts are at risk of being pulled due to controversy.
> > Even for uncontroversial features, technical issues that we can't fix
> > in time are sometimes discovered and result in the feature being
> > bumped to the next release. All in all, this is going to be a busy and
> > eventful meeting.
> >
> > If you're curious about the state of C++ standardization, I encourage
> > you to check out my blog posts where I summarize each meeting in
> > detail (most recent one here [1]), and the list of proposals being
> > considered by the committee (new ones since the last meeting can be
> > found here [2] and here [3]).
> >
> > I will be attending this meeting, likely splitting my time between the
> > Evolution Working Group (where new language features are discussed at
> > the design level) and interesting Study Groups such as SG7
> > (Reflection) and SG15 (Tooling). As always, if there's anything you'd
> > like me to find out for you at the meeting, or any feedback you'd like
> > me to communicate, please let me know!
> >
> > Finally, I encourage you to reach out to me if you're thinking of
> > submitting a proposal to the committee. I'm always happy to help with
> > formulating and, if necessary, presenting a proposal.
> >
> > Cheers,
> > Botond
> >
> > [1]
> > https://botondballo.wordpress.com/2019/03/20/trip-report-c-standards-meeting-in-kona-february-2019/
> > [2]
> > http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/#mailing2019-03
> > [3]
> > http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/#mailing2019-06


Re: non-const reference parameters in new and older code

2019-07-22 Thread Jeff Gilbert
I could grudgingly get on board with that, though I feel that there
are sort of two levels of mutability I use: casual and essential.
Essential is protected by constness, whereas casual is sort of
everyday minor changes, but changes I don't want to allow in `const`
code, thus don't want `mutable`. It's subjective, but has been useful
to handle mutability trade-offs. (since we don't have multiple
constnesses, Views and factories are sometimes alternatives)

I really really like the non-null annotation reference provides, since
pointers (especially in headers without diving into source code) are
ambiguously nullable.

On Mon, Jul 22, 2019 at 7:29 AM David Teller  wrote:
>
> I believe in least surprise for the caller of an API. This seems to
> match with the Google style, as you describe it: any parameter which may
> be mutated in any manner should be passed as pointer, rather than reference.
>
> Cheers,
>  David
>
> On 22/07/2019 08:43, Karl Tomlinson wrote:
> > https://google.github.io/styleguide/cppguide.html#Reference_Arguments
> > has a simple rule to determine when reference parameters are
> > permitted:
> > "Within function parameter lists all references must be const."
> > This is consistent with Mozilla's previous coding style:
> > "Use pointers, instead of references for function out parameters,
> > even for primitive types." [1]
> > However, across Gecko there are different interpretations of what
> > "out" parameter means.
> >
> > The Google style considers a parameter to be an out parameter if
> > any of its state may be mutated by the callee.
> > In some parts of Gecko, a parameter is considered an out parameter
> > only if the callee might make wholesale changes to the state of
> > the parameter.  Well before the announcement to switch to Google style,
> > this interpretation was discussed in 2017 [2], with subsequent
> > discussion around which types were suitable as non-const reference
> > parameters.
> >
> > I'm asking how should existing surrounding code with some
> > different conventions affect when is it acceptable to follow
> > Google style for reference or pointer parameters?
> >


Re: NotNull and pointer parameters that may or may not be null

2019-07-22 Thread Jeff Gilbert
FWIW RefPtr already behaves as MaybeNull for assert-on-deref, making
them functionally Maybe. (though they don't represent a contract
that it's nullable)

For non-outvar-args, I tend to use references to document non-null,
though I do so even for many non-const uses. (balancing what is and
isn't `mutable`/non-const is hard. Casual vs important side-effects)
References obviously should not be null, which makes the non-null
contract pretty obvious, though there's no assert for `*(T*)foo`. Like
you say though, null-derefs are self-asserting. :)

Outvars are always pointers for me, non-null unless documented
otherwise. (I should probably be using NotNull, but again,
self-asserting)

I agree that it's often not clear when args are actually nullable,
with the exception of dom::Nullable coming out of bindings code. Maybe
we should move Nullable to mfbt? I like the idea of annotating these
contracts. (Possibly also a NullableRefPtr? I occasionally use
> Maybe<RefPtr<T>> for this, but the double-deref it requires is sort of
awkward)

If we want a pure annotation, we could always `#define NULLABLE`!

On Mon, Jul 22, 2019 at 12:00 AM Karl Tomlinson  wrote:
>
> Google style requires pointers for parameters that may be mutated
> by the callee, which provides that the potential mutation is
> visible at the call site.  Pointers to `const` types are
> permitted, but recommended when "input is somehow treated
> differently" [1], such as when a null value may be passed.
>
> Comments at function declarations should mention
> "Whether any of the arguments can be a null pointer." [2]
>
> Some Gecko code has instead sometimes used a different
> reference/pointer convention to identify parameters which can be
> null.  With Google style, there is the question of whether to use
> a different mechanism to indicate whether or not pointer parameter
> may be null.
>
> If a method signature wants a pointer to an object of a particular
> type, then I would usually expect to need to provide a pointer to
> an object, unless documented otherwise.  I wonder whether in the
> general case, possibly-null pointers are more the exception than
> the rule?  Does use of NotNull add much value if most pointer
> parameters are assumed not null, but not explicitly wrapped?
>
> I wonder whether the reverse kind of wrapping would be more
> useful?  A MaybeNull type was previously contemplated but
> considered more complex and less obviously useful [3].
>
> A NotNull wrapping of the parameter would catch nulls when set in
> debug builds.  A MaybeNull wrapping could catch nulls when
> dereferenced.  Assertions don't really seem necessary to me
> because we are almost certain to find out anyway if code that
> expects non-null receives a null pointer.  Behavior is actually
> undefined, but I don't recall ever seeing a small null offset
> dereference bug rated worse than sec-low.  The primary benefit of
> a wrapping would be to annotate expectations and perhaps even
> require more thought about what those are.
>
> mozilla::NotNull was introduced in 2016 [4] but hasn't seen much
> adoption.  I found very few header files that use NotNull
> consistently, but looked at the ones with most usage.
>
> At the time that NotNull usage was introduced to
> image/DecoderFactory.h [5], it was applied to eight parameters.
> Two pointer parameters did not get wrapped with NotNull.  Of those
> the `const char* aMimeType` parameter on the public method was
> assumed non-null by the implementation.  The other parameter was
> for a private method that permits `RasterImage* aImage` to be null.
> Since then, two additional `IDecodingTask** aOutTask` pointer
> parameters have been added, each of which is assumed non-null.
>
> intl/Encoding.h [6] has seven uses of NotNull in return types.
> In the C++ API exposed, I found five instances of pointers in
> return values where NotNull was not applied: each used nullptr as
> a sentinel.
>
> It seems that NotNull is particularly useful for Encoding because
> sometimes there may be a sentinel value and other times not.
> A MaybeNull alternative would be at least as useful in this case.
>
> I'm inclined to skip NotNull unless there is a special reason to
> use it.  Anyone have reason to feel differently?
>
> [1]
> https://google.github.io/styleguide/cppguide.html#Reference_Arguments
> [2]
> https://google.github.io/styleguide/cppguide.html#Function_Comments
> [3]
> https://groups.google.com/d/msg/mozilla.dev.platform/_jfnwDvcvN4/rl6c3TcFAQAJ
> [4]
> https://groups.google.com/d/msg/mozilla.dev.platform/6M_afibWhBA/MCmrYktaBgAJ
> [5]
> https://searchfox.org/mozilla-central/rev/f6528fc8520d47e507877da3dda798ab57385be2/image/DecoderFactory.h#93
> [6]
> https://searchfox.org/mozilla-central/rev/4fbcfe6fecf75d7b828fbebda79a31385f7eab5f/intl/Encoding.h

Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-05 Thread Jeff Gilbert
dom/canvas has enabled -Werror=implicit-int-conversion since 68! :)
https://bugzilla.mozilla.org/show_bug.cgi?id=1540357

On Fri, Jul 5, 2019 at 11:15 AM Chris Peterson  wrote:
>
> On 7/5/2019 10:39 AM, Gijs Kruitbosch wrote:
> >> FWIW once in a while I have come across bugs caused by truncation of
> >> integers where someone picked a specific size that was too small also,
> >> e.g.
> >> storing an offset into a text node in a 16-bit integer.  I think that's
> >> maybe something that's hiding between the lines there, being careful with
> >> that direction also if you pick a type with a specific size to make sure
> >> your type is large enough.
> >
> > Yep. Recent example: https://bugzilla.mozilla.org/show_bug.cgi?id=1556019 .
>
> If integer truncation bugs are something we're really concerned about,
> clang 8 added a new -Wimplicit-int-conversion (and
> -Wimplicit-float-conversion) warning. Unfortunately, there are a couple
> thousand instances of these warnings in mozilla-central. I don't know if
> fixing them is practical, but they could be selectively enabled (or
> disabled) for individual directories.
>
> https://clang.llvm.org/docs/DiagnosticsReference.html#wimplicit-int-conversion
>
> warning: higher order bits are zeroes after implicit conversion
> warning: implicit conversion loses integer precision: A to B
>
> warning: implicit conversion loses floating-point precision: A to B
> warning: implicit conversion when assigning computation result loses
> floating-point precision: A to B


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-04 Thread Jeff Gilbert
Yes I intend to write precisely that, if we ban unsigned types.
However I'm not really convinced that throwing out unsigned types is
the right move.

For instance, one of the optimizations mentioned in the linked video
seems to not mention that using (unsigned!) size_t instead of uint32_t
(like you're sort of supposed to) should also improve code generation
in the same way as using int32_t.

It is, however, super poignant to me that uint32_t-indexing-on-x64 is
pessimal, as that's precisely what our ns* containers (nsTArray) use
for size, /unlike/ their std::vector counterparts, which will be using
the more-optimal size_t.

I'm not saying there's nothing wrong with unsigned types, but rather
than I feel their faults are being overblown, and the arguments
against them feel cherry-picked, which in a language like C++, is easy
to do.

On Thu, Jul 4, 2019 at 4:45 PM Gerald Squelart  wrote:
>
> (Glad I started this discussion; thank you Nathan for the enlightening links, 
> I need to review all my code now!)
>
> Jeff, maybe what we need is a new value type that advertises that it's 
> unsigned, but doesn't have the unwanted 2^N wrapping (and its effects on 
> bug-finding tools and compiler optimizations).
> `class Unsigned { int mValue; /* magic API here */ }` -- feels like unsigned, 
> but underneath it's all `int` arithmetics, with optional >=0 assertions.
> Would that help?
>
> Gerald
>
>
> On Friday, July 5, 2019 at 5:35:30 AM UTC+10, Jeff Gilbert wrote:
> > That's what CheckedInt is for, and that's what we use.
> >
> > The problems webgl deals with aren't arithmetic. Arithmetic is easy.
> > (CheckedInt!) Reasoning about constraints is hard.
> >
> > We have some entrypoints where negative values are valid, and many
> > where they are not. It's really nice to have a natural way to document
> > which we expect /at compile time/. Saying "no unsigned types" really
> > throws out the baby with the bathwater for me.
> >
> > On Thu, Jul 4, 2019 at 11:46 AM Botond Ballo  wrote:
> > >
> > > On Thu, Jul 4, 2019 at 2:03 PM Jeff Gilbert  wrote:
> > > > It's a huge
> > > > help to have a compile-time constraint that values can't be negative.
> > >
> > > The question is, how useful is that guarantee. Suppose you have some
> > > code that decrements an integer too far, past zero. Instead of having
> > > a -1 you'll have a 4294967295. Is that an improvement? Will it give
> > > the code saner behaviour than the -1?
> > >
> > > Cheers,
> > > Botond
>


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-04 Thread Jeff Gilbert
That's what CheckedInt is for, and that's what we use.

The problems webgl deals with aren't arithmetic. Arithmetic is easy.
(CheckedInt!) Reasoning about constraints is hard.

We have some entrypoints where negative values are valid, and many
where they are not. It's really nice to have a natural way to document
which we expect /at compile time/. Saying "no unsigned types" really
throws out the baby with the bathwater for me.

On Thu, Jul 4, 2019 at 11:46 AM Botond Ballo  wrote:
>
> On Thu, Jul 4, 2019 at 2:03 PM Jeff Gilbert  wrote:
> > It's a huge
> > help to have a compile-time constraint that values can't be negative.
>
> The question is, how useful is that guarantee. Suppose you have some
> code that decrements an integer too far, past zero. Instead of having
> a -1 you'll have a 4294967295. Is that an improvement? Will it give
> the code saner behaviour than the -1?
>
> Cheers,
> Botond


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-04 Thread Jeff Gilbert
I really, really like unsigned types, to the point of validating and
casting into unsigned versions for almost all webgl code. It's a huge
help to have a compile-time constraint that values can't be negative.
(Also webgl has implicit integer truncation warnings-as-errors, so we
don't really worry about mixed-signedness)

If we insist on avoiding standard uint types, I'll be writing uint31_t wrappers.

If we're going to recommend against uint types, I would like to see
specific compelling examples of problems with them, not just prose
about "many people say" or "maybe missed optimizations".

On Thu, Jul 4, 2019 at 8:11 AM Botond Ballo  wrote:
>
> On Thu, Jul 4, 2019 at 7:11 AM Henri Sivonen  wrote:
> > > Do you happen to know why?  Is this due to worries about underflow or
> > > odd behavior on subtraction or something?
> >
> > I don't _know_, but most like they want to benefit from optimizations
> > based on overflow being UB.
>
> My understanding is yes, that's one of the motivations.
>
> Another, as hinted at in Gerald's quote, is that tools like UBSan can
> diagnose and catch signed overflow because it's undefined behaviour.
> They can't really do that for unsigned overflow because, since that's
> defined to wrap, for all the tool knows the code author intended for
> the overflow and wrapping to occur.
>
> Cheers,
> Botond


Re: Changes to “ExposureGuidelines” (Intent to *)

2019-07-03 Thread Jeff Gilbert
I think "Intent to prototype" is a good clarity improvement, thanks!

On Wed, Jul 3, 2019 at 2:48 AM Anne van Kesteren  wrote:
>
> Hi all,
>
> In consultation with others I have made some adjustments to
> https://wiki.mozilla.org/ExposureGuidelines.
>
> “Intent to implement” has been renamed to “Intent to prototype” to signify
> more clearly that the change does not affect the shipping product and has
> no direct relation to standardization processes (though continues to be an
> important input for them).
>
> “Intent to ship” encompasses “Intent to implement and ship”. Include the
> fields from “Intent to prototype” if the change does not warrant going
> through two stages.
>
> “Intent to unship” remains as-is.
>
> (Despite some recent emails, “Intent to experiment” is not part of the
> process. Fortunately “Intent to prototype” works well for them.)
>
> I also took this opportunity to revise the page so that it now leads with
> the more significant information. Please do feel free to edit this further
> or message me suggestions.
>
> Kind regards,
>
> Anne


Re: Moving reviews to Phabricator

2019-02-06 Thread Jeff Gilbert
FWIW, I have exactly one machine with everything set up, and
git-fetch/push and ssh into it in order to push up or pull down from
moz-central.

On Wed, Feb 6, 2019 at 12:49 PM Jörg Knobloch  wrote:
>
> On 06/02/2019 21:42, Kim Moir wrote:
> > On February 28 Bugzilla review flags will be disabled for Firefox and other
> > mozilla-central products and components. From this point forward, all
> > reviews of code changes to mozilla-central should be conducted in
> > Phabricator. Tasks that have been identified as crucial to this transition
> > have been set as blocker bugs to https://bugzil.la/1514775.
>
> Any simplification in sight to get this set up more easily for Windows
> users?
>
> Last I looked you needed 12 steps to get there:
> https://moz-conduit.readthedocs.io/en/latest/arcanist-windows.html
>
> I have three Windows development machines to set up :-(
>
> Jörg.
>
>


Re: Intent to Desupport: Fennec Automated Updates (when Sideload Installed)

2019-02-05 Thread Jeff Gilbert
Did we figure out how to tell users they're not going to get updates
anymore? I don't see a resolution in that PR.

This used to be the only way to get Nightly, so I'd expect long time
users to be surprised. I only stopped using the downloaded APK last
year, I think.

On Tue, Feb 5, 2019 at 10:04 AM Justin Wood  wrote:
>
> To follow up here, this is now landed on autoland, once this merges Fennec
> Nightly will no longer find new updates when it was installed outside of
> Google Play.
>
> See linked document from thread for further insight.
>
> ~Justin Wood (Callek)
>
> On Mon, Jan 28, 2019 at 3:36 PM Justin Wood  wrote:
>
> > Hello,
> >
> > I am intending to stop offering Fennec updates when Fennec itself is
> > installed manually instead of via the Google Play store.
> >
> > More details are at https://github.com/mozilla-releng/releng-rfcs/pull/13
> > or in rendered form at
> > https://github.com/Callek/releng-rfcs/blob/no_fennec_balrog/rfcs/0013-disable-fennec-balrog.md
> >
> > Please contain (most) comments to the markdown on github.
> >
> > I intend to implement this on autoland/central as of Feb 5, 2019 and allow
> > it to ride the trains to release.
> >
> > Thank You,
> > ~Justin Wood (Callek)
> > Release Engineer
> >


Re: C++ method definition comments

2019-01-28 Thread Jeff Gilbert
I would much rather revert to:
/*static*/ void
Foo::Bar()

The Foo::Bar is the most relevant part of that whole expression, which
makes it nice to keep up against the start of the line.

In a clang-format world, we should feel more free to make such
deviations from Google Style, since it's all handled for us.
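For illustration, removing the comments outright could be as simple as a sed pass over the affected files. This is a hypothetical one-liner, not the actual regex from [3]:

```shell
# Strip "/* static */ " and "/* virtual */ " qualifier comments from
# definition lines (using @ as the sed delimiter to avoid escaping slashes).
printf '/* static */ void Foo::Bar() {\n' |
  sed -E 's@/\* (static|virtual) \*/ @@'
# → void Foo::Bar() {
```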

On Mon, Jan 28, 2019 at 10:52 AM Ehsan Akhgari  wrote:
>
> This is indeed one of the cases where the reformat has made things worse.
> I think as a couple of people have already said, we'll find that some
> people do find these annotations useful, even if they're not always
> consistently present.
>
> The path to least resistance for addressing this problem may be to convert
> these into C++-style comments and therefore moving them into their own
> lines.  Would you be OK with that?
>
> Thanks,
> Ehsan
>
> On Fri, Jan 25, 2019 at 11:49 PM Ryan Hunt  wrote:
>
> > Hi all,
> >
> > Quick C++ style question.
> >
> > A common pattern in Gecko is for method definitions to have a comment with
> > the
> > 'static' or 'virtual' qualification.
> >
> > Before the reformat, the comment would be on its own separate line [1].
> > Now
> > it's on the main line of the definition [2].
> >
> > For example:
> >
> > /* static */ void
> > Foo::Bar() {
> >   ...
> > }
> >
> > vs.
> >
> > /* static */ void Foo::Bar() {
> >   ...
> > }
> >
> > Personally I think this now takes too much horizontal space from the main
> > definition, and would prefer it to be either on its own line or just
> > removed.
> >
> > Does anyone have an opinion on whether we still want these comments? And
> > if so
> > whether it makes sense to move them back to their own line.
> >
> > (My ulterior motive is that sublime text's indexer started failing to find
> >  these definitions after the reformat, but that should be fixed regardless)
> >
> > If you're interested in what removing these would entail, I wrote a regex
> > to
> > make the change [3].
> >
> > Thanks,
> > Ryan
> >
> > [1]
> > https://hg.mozilla.org/mozilla-central/file/0348d472115d/layout/generic/nsFrame.cpp#l1759
> > [2]
> > https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1756
> > [3]
> > https://hg.mozilla.org/try/rev/31ab3e466b6f15dcdbb1aee47edabc7c358b86f2
> >
>
>
> --
> Ehsan


Re: Enabling (many) assertions in opt builds locally and eventually Nightly

2018-09-17 Thread Jeff Gilbert
I would be excited for something between MOZ_ASSERT and MOZ_CRASH, but
I think raising MOZ_ASSERTs to MOZ_ASSERT_NIGHTLY should be done by
hand and reviewed.

On Mon, Sep 17, 2018 at 11:46 AM, Kris Maglione  wrote:
> There are some non-trivial snags for this proposal. A lot of assertions rely
> on other code gated on #ifdef DEBUG or DebugOnly<...> to avoid defining data
> members or storing values in non-debug builds. We could replace those with
> more fine-grained macros, of course, but particularly in the case of data
> members, we'd probably also take a significant memory usage regression.
>
> There's also the snag of NS_ASSERTION macros which are... weird.
>
> This is nothing we can't deal with, of course, but it will probably require
> a lot of manual work if we want to do it correctly.
>
>
> On Mon, Sep 17, 2018 at 02:38:14PM -0400, Randell Jesup wrote:
>>
>> There are quite a few things that may be caught by assertions by
>> developers before committing, especially sec issues - but most
>> developers don't run debug builds as their main local test builds, or
>> for local browsing use, because they are almost unusably slow.  And
>> turning on compiler optimization doesn't actually help much here - the
>> problem is mostly debug-only assertions and code that's debug only, such
>> as bug 1441052 ("Don't do full grey-node checking in local debug
>> builds").
>>
>> These added checks may be useful for CI tests, but they make the builds
>> unusable, so the vast majority of assertions don't run in the builds our
>> developers are using.  So enabling most of the existing MOZ_ASSERTs for
>> local opt builds (minus the worst performance-limiting ones) would help
>> developers catch bugs before landing them (and perhaps shipping
>> them). It's a lot like using ASAN builds to browse - but with much less
>> perf degradation, one hopes.
>>
>> Even better, if we can make the overall hit to perf low enough (1%?), we
>> could enable it for some/all Nightly users/builds.  Or make "early
>> nightly" (and developer builds) enable them, and late nightly and beta
>> not.
>>
>> If we moved some of the most expensive checks to an alternative macro
>> (MOZ_ASSERT_DEBUG()), we could (selectively?) enable MOZ_ASSERT checks
>> in some opt builds.  Alternatively we could promote some MOZ_ASSERTs to
>> MOZ_ASSERT_NIGHTLY() (or MOZ_DIAGNOSTIC_ASSERT), which would assert in
>> (all) debug builds, and in nightly opt builds - and maybe promote some
>> to MOZ_ASSERT_RELEASE if we can take the perf hit, and they're in an
>> important spot.
>>
>> And as a stepping-stone to the above, having local opt builds enable
>> (most) MOZ_ASSERTs (excluding really expensive ones, like bug 1441052)
>> even if the hit is larger (5, 10%) would also increase the likelihood
>> that we'll catch things before we land them, or before they get to beta.
>> (Note: --enable-release would turn this local-build behavior off - and
>> anyone doing speed tests/profiling/etc needs that already, and probably
>> we should have a specific enable/disable like
>> --disable-extra-assertions).  This wouldn't be all that hard - most of
>> the work would be finding "expensive" assertions like bug 1441052.
>>
>> (and perhaps we should not continue to overload --enable-release for "I
>> want to benchmark/profile this build")
>>
>> Enabling most MOZ_ASSERTs in automation tests on opt builds would also
>> slightly increase our odds of finding problems - though this may not
>> help much, as the same tests are being run in debug builds.  The only
>> advantage would be races may be more likely in the faster opt builds.
>> It might be worth trying once we have this for a few weeks, and see if
>> it helps or not.
>>
>> Note: I've discussed this concept with various people including bz in
>> the past, and mostly have had agreement that this would be useful - thus
>> my filing of bug 1441052.  If we agree this is worth doing, we should
>> file bugs on it and see what the effort to get this would be, and if we
>> could ship some of this to users.  Much like nightly-asan, this would
>> shake out bugs we'll never find from crash-stats reports, and which can
>> lead to critical sec bugs, and turn them into frequently-actionable
>> bugs.  If needed this can be enabled incrementally once the build/config
>> infra and macros are in place; there are several options for that.
>


Re: minimum version of clang will become 3.9 soon.

2018-09-10 Thread Jeff Gilbert
I'm always happy for these updates! Thanks!

On Sat, Sep 8, 2018 at 4:19 PM,   wrote:
> This is already practically the case as there are unfixed compilation errors 
> with clang 3.8 (https://bugzilla.mozilla.org/show_bug.cgi?id=1487810).
>
> Bug https://bugzilla.mozilla.org/show_bug.cgi?id=1482196 will fix the build 
> tools so they will require at least clang 3.9 and fail early if not.
>
> I plan to land this pretty soon barring objections.


Re: Workaround for building with Visual Studio 15.7+

2018-08-16 Thread Jeff Gilbert
FWIW, I found that building with clang-cl works fine out-of-the-box
with 15.7.x, and the Installer tells me that I don't have the 15.6
toolset installed. YMMV, but I've found that clang-cl builds Just Work
with 15.7.

On Thu, Aug 16, 2018 at 5:23 AM, Gabriele Svelto  wrote:
>  Hello all,
> being among those unfortunate enough to have updated Visual Studio
> before realizing that the most recent version cannot be used to build
> Firefox, I was faced with the prospect of reinstalling the whole
> monstrosity - which takes forever - or finding a workaround. As it turns
> out I found a clunky yet simple one:
>
> - Launch the Visual Studio Installer, the click on the "modify" button
>
> - Go into the "Individual components" and select the "VC++ 2017 version
> 15.6 v14.13 toolset" component
>
> - Launch the installation
>
> - Modify build/moz.configure/toolchain.configure by manually setting the
> tools_version string in [1] to the value '14.13.26128'
>
> - You're done! Both clang-cl and MSVC builds should now pick up the old
> tooling and work properly (provided you have the right Windows SDK).
>
> I couldn't find a way to override this via mozconfig options or
> environment variables but if there is a way to do this without modifying
> the sources I'd like to hear it.
>
>  Gabriele
>
> [1]
> https://searchfox.org/mozilla-central/rev/5dbfd833bbb114afe758db4d4bdbc5b13bcc33ef/build/moz.configure/toolchain.configure#584
>
>


Re: mozilla::Hash{Map,Set}

2018-08-15 Thread Jeff Gilbert
Awesome!

What are the pros/cons to mozilla::Hash{Map,Set} vs
std::unordered_{map,set}? I'm assuming perf is the main draw? Do we
have data on this too?

On Mon, Aug 13, 2018 at 10:44 PM, Nicholas Nethercote
 wrote:
> Hi,
>
> I recently moved Spidermonkey's js::Hash{Map,Set} classes from
> js/public/HashTable.h into mfbt/HashTable.h and renamed them as
> mozilla::Hash{Map,Set}. They can now be used throughout Gecko.
>
> (js/public/HashTable.h still exists in order to provide renamings of
> mozilla::Hash{Map,Set} as js::Hash{Map,Set}.)
>
> Why might you use mozilla::Hash{Map,Set} instead of PLDHashTable (or
> nsTHashtable and other descendants)?
>
> - Better API: these types provide proper HashMap and HashSet instances, and
> (in my opinion) are easier to use.
>
> - Speed: the microbenchmark in xpcom/rust/gtest/bench-collections/Bench.cpp
> shows that mozilla::HashSet is 2x or more faster than PLDHashTable. E.g.
> compare "MozHash" against "PLDHash" in this graph:
>
> https://treeherder.mozilla.org/perf.html#/graphs?timerange=604800&series=mozilla-central,1730159,1,6&series=mozilla-central,1730162,1,6&series=mozilla-central,1730164,1,6&series=mozilla-central,1732092,1,6&series=mozilla-central,1730163,1,6&series=mozilla-central,1730160,1,6
>
> Bug 1477627 converted a hot hash table from PLDHashTable to
> mozilla::HashSet and appears to have sped up cycle collection in some cases
> by 7%. If you know of another PLDHashTable that is hot, it might be worth
> converting it to mozilla::HashTable.
>
> Both mozilla::Hash{Map,Set} and PLDHashTable use the same double-hashing
> algorithm; the speed difference is due to mozilla::HashSet's extensive use
> of templating and inlining. The downside of this is that mozilla::HashSet
> will increase binary size more than PLDHashTable.
>
> There are overview comments at the top of mfbt/HashTable.h, and the classes
> themselves have more detailed comments about every method.
>
> Nick


Re: PSA: Avoid Visual Studio 2017 15.7.0

2018-08-06 Thread Jeff Gilbert
Consider instead building with clang-cl:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Building_Firefox_on_Windows_with_clang-cl

If you build with clang-cl, you can keep the newest
(gecko-incompatible) version of MSVC installed, which is particularly
useful if you build other modern codebases.

On Fri, May 11, 2018 at 3:01 PM, Matthew N.  wrote:
> On 2018-05-08 6:40 a.m., Ryan VanderMeulen wrote:
>>
>> Yesterday, Microsoft released Visual Studio 2017 15.7.0. Unfortunately, it
>> is currently not usable for building Firefox due to bug 1458247 (internal
>> compiler errors in WebRTC code). The bug was already reported and
>> confirmed
>> upstream during the 15.7 preview cycle, but unfortunately the final
>> release
>> still shipped with the bug present.
>>
>> At this point, there are no workarounds available for this issue, so avoid
>> the update if you want to be able to continue building Firefox.
>
>
> If you need to update to 15.6 but hadn't yet, you can download old versions
> at
> https://docs.microsoft.com/en-us/visualstudio/productinfo/installing-an-earlier-release-of-vs2017
>
> - MattN
>


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Jeff Gilbert
I have filed bug 1477436: "Preferences::Get* is not threadsafe/is main
thread only"
https://bugzilla.mozilla.org/show_bug.cgi?id=1477436

On Fri, Jul 20, 2018 at 11:36 AM, Eric Rahm  wrote:
> We *could* special case prefs with an appropriate data structure that works
> in a thread-safe nature; as far as RWLock's go, we do have one in tree [1].
> This has gone off the rails a bit from Kris' original announcement, which
> I'll reiterate: Watch out for prefs related bustage.
>
> Jeff, would you mind filing a bug for further discussion of off-main-thread
> access as a future improvement?
>
> [1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h
>
> On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione 
> wrote:
>>
>> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>>>
>>> Using a classic read/write exclusive lock, we would only ever contend
>>> on read+write or write+write, which are /rare/.
>>
>>
>> That might be true if we gave up on the idea of switching to Robin Hood
>> hashing. But if we don't, then every lookup is a potential write, which
>> means every lookup required a write lock.
>>
>> We also don't really have any good APIs for rwlocks at the moment. Which,
>> admittedly, is a solvable problem.
>>
>>
>>> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
>>> wrote:
>>>>
>>>> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>>>>
>>>>>
>>>>> We should totally be able to afford the very low cost of a
>>>>> rarely-contended lock. What's going on that causes uncached pref reads
>>>>> to show up so hot in profiles? Do we have a list of problematic pref
>>>>> keys?
>>>>
>>>>
>>>>
>>>> So, at the moment, we read about 10,000 preferences at startup in debug
>>>> builds. That number is probably slightly lower in non-debug builds, but
>>>> we
>>>> don't collect stats there. We're working on reducing that number (which
>>>> is
>>>> why we collect statistics in the first place), but for now, it's still
>>>> quite
>>>> high.
>>>>
>>>>
>>>> As for the cost of locks... On my machine, in a tight loop, the cost of
>>>> entering and exiting MutexAutoLock is about 37ns. This is pretty close
>>>> to
>>>> ideal circumstances, on a single core of a very fast CPU, with very fast
>>>> RAM, everything cached, and no contention. If we could extrapolate that
>>>> to
>>>> normal usage, it would be about a third of a ms of additional overhead
>>>> for
>>>> startup. I've fought hard enough for 1ms startup time improvements, but
>>>> *shrug*, if it were that simple, it might be acceptable.
>>>>
>>>> But I have no reason to think the lock would be rarely contended. We
>>>> read
>>>> preferences *a lot*, and if we allowed access from background threads, I
>>>> have no doubt that we would start reading them a lot from background
>>>> threads
>>>> in addition to reading them a lot from the main thread.
>>>>
>>>> And that would mean, in addition to lock contention, cache contention
>>>> and
>>>> potentially even NUMA issues. Those last two apply to atomic var caches
>>>> too,
>>>> but at least they generally apply only to the specific var caches being
>>>> accessed off-thread, rather than pref look-ups in general.
>>>>
>>>>
>>>> Maybe we could get away with it at first, as long as off-thread usage
>>>> remains low. But long term, I think it would be a performance foot-gun.
>>>> And,
>>>> paradoxically, the less foot-gunny it is, the less useful it probably
>>>> is,
>>>> too. If we're only using it off-thread in a few places, and don't have
>>>> to
>>>> worry about contention, why are we bothering with locking and off-thread
>>>> access in the first place?
>>>>
>>>>
>>>>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 13/07/2018 21:37, Kris Maglione wrote:

Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Jeff Gilbert
Using a classic read/write exclusive lock, we would only ever contend
on read+write or write+write, which are /rare/.

It's really, really nice when we can have dead-simple threadsafe APIs,
instead of requiring people to jump through hoops or roll their own
(fragile) dispatch code. IIRC most new APIs added to the web are
supposed to be available in Workers, so the need for reading prefs
off-main-thread is only set to grow.

I don't see how this can mutate into a foot-gun in ways that aren't
already the case today without off-main-thread access.

Anyway, I appreciate the work that's been done and is ongoing here. As
you burn down the pref accesses in start-up, please consider
unblocking this feature request. (Personally I'd just eat the 400us in
exchange for this simplifying architectural win)

On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione  wrote:
> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>
>> We should totally be able to afford the very low cost of a
>> rarely-contended lock. What's going on that causes uncached pref reads
>> to show up so hot in profiles? Do we have a list of problematic pref
>> keys?
>
>
> So, at the moment, we read about 10,000 preferences at startup in debug
> builds. That number is probably slightly lower in non-debug builds, but we
> don't collect stats there. We're working on reducing that number (which is
> why we collect statistics in the first place), but for now, it's still quite
> high.
>
>
> As for the cost of locks... On my machine, in a tight loop, the cost of
> entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
> ideal circumstances, on a single core of a very fast CPU, with very fast
> RAM, everything cached, and no contention. If we could extrapolate that to
> normal usage, it would be about a third of a ms of additional overhead for
> startup. I've fought hard enough for 1ms startup time improvements, but
> *shrug*, if it were that simple, it might be acceptable.
>
> But I have no reason to think the lock would be rarely contended. We read
> preferences *a lot*, and if we allowed access from background threads, I
> have no doubt that we would start reading them a lot from background threads
> in addition to reading them a lot from the main thread.
>
> And that would mean, in addition to lock contention, cache contention and
> potentially even NUMA issues. Those last two apply to atomic var caches too,
> but at least they generally apply only to the specific var caches being
> accessed off-thread, rather than pref look-ups in general.
>
>
> Maybe we could get away with it at first, as long as off-thread usage
> remains low. But long term, I think it would be a performance foot-gun. And,
> paradoxically, the less foot-gunny it is, the less useful it probably is,
> too. If we're only using it off-thread in a few places, and don't have to
> worry about contention, why are we bothering with locking and off-thread
> access in the first place?
>
>
>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
>> wrote:
>>>
>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>>
>>>>
>>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>>
>>>>>
>>>>> tl;dr: A major change to the architecture of the preference service has just
>>>>> landed, so please be on the lookout for regressions.
>>>>>
>>>>> We've been working for the last few weeks on rearchitecting the
>>>>> preference service to work better in our current and future
>>>>> multi-process
>>>>> configurations, and those changes have just landed in bug 1471025.
>>>>
>>>>
>>>>
>>>> Looks like a great step forward!
>>>>
>>>> While we're thinking about the prefs service, is there any possibility
>>>> we
>>>> could enable off-main-thread access to preferences?
>>>
>>>
>>>
>>> I think the chances of that are pretty close to 0, but I'll defer to
>>> Nick.
>>>
>>> We definitely can't afford the locking overhead—preference look-ups
>>> already
>>> show up in profiles without it. And even the current limited exception
>>> that
>>> we grant Stylo while it has the main thread blocked causes problems (bug
>>> 1474789), since it makes it impossible to update statistics for those
>>> reads,
>>> or switch to Robin Hood hashing (which would make our hash tables much
>>> smaller and more efficient, but requires read operations to be able to
>>> move entries).

Re: C++ standards proposal for a embedding library

2018-07-18 Thread Jeff Gilbert
It feels like the committee is burnt out on trying to solve the
general library problem, but contemplating something massively complex
like this instead doesn't follow, and is an answer to the wrong
question.

Make it easier to integrate libraries and we wouldn't see kludge
proposals like this.

On Wed, Jul 18, 2018 at 9:45 AM, Botond Ballo  wrote:
> Hi everyone,
>
> With the proposal for a standard 2D graphics library now on ice [1],
> members of the C++ standards committee have been investigating
> alternative ways of giving C++ programmers a standard way to write
> graphical and interactive applications, in a way that leverages
> existing standards and imposes a lower workload on the committee.
>
> A recent proposal along these lines is for a standard embedding
> facility called "web_view", inspired by existing embedding APIs like
> Android's WebView:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
>
> As we have some experience in the embedding space here at Mozilla, I
> was wondering if anyone had feedback on this embedding library
> proposal. This is an early-stage proposal, so high-level feedback on
> the design and overall approach is likely to be welcome.
>
> Thanks!
> Botond
>
> [1] 
> https://botondballo.wordpress.com/2018/06/20/trip-report-c-standards-meeting-in-rapperswil-june-2018/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Major preference service architecture changes inbound

2018-07-17 Thread Jeff Gilbert
We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?

On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione  wrote:
> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>
>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>
>>> tl;dr: A major change to the architecture of the preference service has just
>>> landed, so please be on the lookout for regressions.
>>>
>>> We've been working for the last few weeks on rearchitecting the
>>> preference service to work better in our current and future multi-process
>>> configurations, and those changes have just landed in bug 1471025.
>>
>>
>> Looks like a great step forward!
>>
>> While we're thinking about the prefs service, is there any possibility we
>> could enable off-main-thread access to preferences?
>
>
> I think the chances of that are pretty close to 0, but I'll defer to Nick.
>
> We definitely can't afford the locking overhead—preference look-ups already
> show up in profiles without it. And even the current limited exception that
> we grant Stylo while it has the main thread blocked causes problems (bug
> 1474789), since it makes it impossible to update statistics for those reads,
> or switch to Robin Hood hashing (which would make our hash tables much
> smaller and more efficient, but requires read operations to be able to move
> entries).
>
>> I am aware that in simple cases, this can be achieved via the
>> StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
>> from other threads. But this doesn't help in my use case, where I need
>> another thread to be able to query an extensible set of pref names that are
>> not fully known at compile time.
>>
>> Currently, it looks like to do this, I'll have to iterate over the
>> relevant prefs branch(es) ahead of time (on the main thread) and copy all
>> the entries to some other place that is then available to my worker threads.
>> For my use case, at least, the other threads only need read access;
>> modifying prefs could still be limited to the main thread.
>
>
> That's probably your best option, yeah. Although I will say that those kinds
> of extensible preference sets aren't great for performance or memory usage,
> so switching to some other model might be better.
>
>> Possible? Or would the overhead of locking be too crippling?
>
>
> The latter, I'm afraid.
>


Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-06-29 Thread Jeff Gilbert
I don't believe e-prefix adds sufficient value.
Mutable statics *must* always be s-prefixed. If this is not the case I
will gladly bring the codebase into compliance.

On Fri, Jun 29, 2018 at 11:36 AM, smaug  wrote:
> On 06/29/2018 05:58 PM, Boris Zbarsky wrote:
>>
>> On 6/29/18 10:30 AM, Nathan Froyd wrote:
>>>
>>> Given the language-required qualification for
>>> `enum class` and a more Rust-alike syntax, I would feel comfortable
>>> with saying CamelCase `enum class` is the way to go.
>>
>>
>> For what it's worth, I agree.  The point of the "e" prefix is to highlight
>> that you have an enumeration and add some poor-man's namespacing for a
>> potentially large number of common-looking names, and the language-required
>> qualification already handles both of those.
>>
>
> That works if we're consistently naming static variables and such using
> s-prefix etc, so that
> class Foo1
> {
>   static int sBar;
> };
> Foo1::sBar
>
> is clearly different to
> enum class Foo2
> {
>   Bar
> };
>
> Foo2::Bar
>
>
> (personally I'd still prefer using e-prefix always with enums, class or not.
> Foo1::sBar vs Foo2::eBar is easier to distinguish than Foo1::sBar vs
> Foo2::Bar)
>
> -Olli
>
>
>
>
>> -Boris
>
>


Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-06-25 Thread Jeff Gilbert
If we can't agree, it shouldn't be in the style guide.

On Mon, Jun 25, 2018 at 2:17 PM, Peter Saint-Andre  wrote:
> And perhaps good reason for removing it from the style guide? ;-)
>
> On 6/25/18 3:08 PM, Emilio Cobos Álvarez wrote:
>> And Kris pointed out that we already had another huge thread on this:
>>
>>
>> https://groups.google.com/d/msg/mozilla.dev.platform/WAuySoTfq_w/-DggRotpBQAJ
>>
>>
>> Looks like there wasn't agreement on that one... But oh well, don't want
>> to repeat a lot of that discussion.
>>
>> I think the argument for consistency with the other systems language we
>> have in-tree, the fact that it's not predominant (at least for enum
>> classes) even though it is in the coding style, and that there wasn't
>> agreement in the previous thread are good reasons for not enforcing it,
>> but...
>>
>>  -- Emilio
>>
>> On 6/25/18 10:41 PM, Emilio Cobos Álvarez wrote:
>>> Our coding style states that we should use an `e` prefix for enum
>>> variants, that is:
>>>
>>>enum class Foo { eBar, eBaz };
>>>
>>> We're not really consistent about it: looking at layout/, we mostly
>>> use CamelCase, though we do have some prefixed enums. Looking at other
>>> modules, enum classes almost never use it either. DOM bindings also
>>> don't use that prefix.
>>>
>>> I think that with enum classes the usefulness of the prefix is less
>>> justified. Plus removing them would allow us to match the Rust coding
>>> style as well, which is nice IMO.
>>>
>>> Would anybody object to making the prefix non-mandatory, removing that
>>> line from the coding style doc? Maybe only making it non-mandatory for
>>> enum classes?
>>>
>>> Thanks,
>>>
>>>   -- Emilio


Re: Upcoming C++ standards meeting in Rapperswil, Switzerland

2018-06-20 Thread Jeff Gilbert
I agree with the sentiment in the write-up, that ideally beginner 2d
graphics needs are better handled by improving library/packaging
support, not by importing (and speccing??) an ancient library. For
instance in JS, people are more likely to pull in a library (of many)
to help with graphing, despite the availability of a standard 2D
drawing API.

On Wed, Jun 20, 2018 at 1:37 PM, Eric Shepherd (Sheppy)
 wrote:
> Thanks for sharing that overview!
>
> Although I can see why there's a lot of resistance to adding a graphics
> library to the C++ standard, it seems to me like a good idea. Even though,
> yes, there are going to be better and faster libraries out there, it's also
> true that anyone looking to maximize performance is going to be bypassing
> parts of the standard runtime anyway. Even text-based applications often
> bypass the standard I/O library to do their output more directly to the
> console (as I think back to my days of directly blasting characters into
> the text screen's video memory). Math routines are often replaced with
> custom libraries that do things faster, either using better algorithms,
> making use of processor features not directly supported by the standard
> library, or by making accuracy concessions when all you need are "good
> enough" values.
>
> A standard graphics library would serve to make it easier for new
> programers to onboard, would be great for quick tools that graph the
> results of a problem being solved, or even many simple or low-intensity
> applications that simply don't need high-performance graphics.
>
> I do think that parts of the proposal could be broken off into separate
> components. The linear algebra stuff, along with geometry, could be split
> out into a separate proposal, since these have uses beyond 2D graphics.
> This would both help bring other capabilities to light sooner, but would
> lighten up the graphics proposal as well.
>
>
> On Wed, Jun 20, 2018 at 11:29 AM, Botond Ballo  wrote:
>
>> My blog post about what happened at this meeting is now live:
>>
>> https://botondballo.wordpress.com/2018/06/20/trip-report-c-
>> standards-meeting-in-rapperswil-june-2018/
>>
>> Cheers,
>> Botond
>>
>>
>> On Tue, May 22, 2018 at 6:35 PM, Botond Ballo  wrote:
>> > Hi everyone!
>> >
>> > The next meeting of the C++ Standards Committee will be June 4-9 in
>> > Rapperswil, Switzerland.
>> >
>> > This is going to be a pivotal meeting, with go / no-go decisions
>> > expected for several highly-anticipated C++20 features, including a
>> > subset of Modules; Coroutines; Ranges; and "natural syntax" Concepts /
>> > abbreviated function templates. A discussion of whether or not to
>> > continue efforts to standardize 2D Graphics is also scheduled (see
>> > arguments for [1] and against [2]). In addition, work will continue on
>> > various Technical Specifications that are in flight (including,
>> > notably, Reflection), and processing the large influx of new language
>> > and library feature proposals.
>> >
>> > If you're curious about the state of C++ standardization, I encourage
>> > you to check out my blog posts where I summarize each meeting in
>> > detail (most recent one here [3]), and the list of proposals being
>> > considered by the committee (new ones since the last meeting can be
>> > found here [4] and here [5]).
>> >
>> > I will be attending this meeting, hanging out in the Evolution Working
>> > Group (where new language features are discussed at the design level)
>> > as usual. As always, if there's anything you'd like me to find out for
>> > you at the meeting, or any feedback you'd like me to communicate,
>> > please let me know!
>> >
>> > Finally, I encourage you to reach out to me if you're thinking of
>> > submitting a proposal to the committee. I'm always happy to help with
>> > formulating and, if necessary, presenting a proposal.
>> >
>> > Cheers,
>> > Botond
>> >
>> >
>> > [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0988r0.pdf
>> > [2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1062r0.html
>> > [3] https://botondballo.wordpress.com/2018/03/28/trip-report-c-
>> standards-meeting-in-jacksonville-march-2018/
>> > [4] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/#
>> mailing2018-02
>> > [5] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/#
>> mailing2018-04
>>
>
>
>
> --
>
> Eric Shepherd
> Senior Technical Writer
> Mozilla
> Blog: http://www.bitstampede.com/
> Twitter: http://twitter.com/sheppy
> Check my Availability 

Proposal: WebIDL generated bindings classes renamed from dom::FooBinding to dom::binding::Foo

2018-06-07 Thread Jeff Gilbert
We have a name conflict under the current system when trying to use
these two classes from the webgpu sketch webidl:
- WebGPUBuffer
- WebGPUBufferBinding

I'm proposing renaming the generated bindings classes from
dom::FooBinding to dom::binding::Foo. This seems like a reasonable use
of namespaces, and will be immune from naming collisions coming from
webidl.

Also considered was dom::Foo_Binding. (webidl really shouldn't have
underscores, but technically could!) Talking with bz,
dom::binding::Foo sounds better to us, though.


Re: Update on rustc/clang goodness

2018-05-29 Thread Jeff Gilbert
I get that, but it reminds me of the reasons people give for "our
website works best in $browser".

There are also other less-obvious benefits where having multiple
backends can illuminate bugs and deviations from standards, as well as
having another set of warnings and static analysis passes. Once we go
monoculture, it's also really hard to go back, since non-portable
dependencies (both deliberate and accidental) start to pile up.

Now we could deliberately retain these advantages by keeping non-clang
compilers at at least tier-2, while leveraging clang where useful.
(though it looks like clang isn't there yet on win64, perf-wise) It would be nice to
establish our plan here.

On Tue, May 29, 2018 at 9:10 PM, Anthony Jones  wrote:
> On Wednesday, 30 May 2018 08:48:12 UTC+12, Jeff Gilbert  wrote:
>> It would be sad to see us standardize on a clang monoculture.
>
> It pays not to be sentimental about tools. We get better practicality and 
> productivity by focusing on clang. Using the same compiler across all 
> platforms means that we can do more with less effort.


Re: Update on rustc/clang goodness

2018-05-29 Thread Jeff Gilbert
It would be sad to see us standardize on a clang monoculture.

On Tue, May 1, 2018 at 7:56 PM, Anthony Jones  wrote:
> You may already know that the Low-Level Tools team support important tools
> and code infrastructure. Lately we’ve also been improving our rustc/clang
> (LLVM) story and I’d like bring everyone up to date.
>
>
> There are a lot of important and interesting things going on:
>
>
> Michael Woerister and Nathan Froyd recently (mid-March) enabled Rust
> incremental compilation for Firefox developers
>
> Michael is experimenting with cross language inlining (rustc/clang) using
> LTO
>
> Preliminary results show compiling LLVM with clang and using LTO on rustc
> improves stylo compilation time by around 15%
>
> LTO requires more work to support Firefox
>
> Nick Nethercote has started speeding up rustc and has already had a number
> of wins
>
> David Major has got Windows clang builds working with green tests
>
> LTO is still buggy (but works well enough to run benchmarks) and PGO won’t
> run on Windows
>
> Win32 - clang with LTO w/o PGO is marginally faster than MSVC with LTO/PGO
>
> Win64 - clang with LTO w/o PGO is ~5-10% slower than MSVC with LTO/PGO
>
> Windows users can build locally with clang
>
> Mike Hommey is tracking improvements and regressions in the Rust compiler
> (performance regression, crash reporting and more crash reporting)
>
> Tom Tromey is continuing to work on first class Rust support in GDB and LLDB
>
>
> Ultimately, I’d like to see us standardise on clang on all platforms because
> it makes Rust/C++ integration better as well as making things simpler for
> developers and build system maintainers. We’ll get more done if we make our
> own lives simpler.
>
>
> I have some specific requests:
>
>
> Let me know if you have specific Firefox cases where Rust is slowing you
> down
>
> Cross language inlining is coming - avoid duplication between Rust and C++
>
> Start doing your developer builds with clang
>
> Anthony
>
>
>


Re: Intent to implement and ship: ping, rel, referrerPolicy, relList, hreflang, type and text properties on SVG elements

2018-04-09 Thread Jeff Gilbert
Do we have a heuristic for when to /not/ include something from HTML in SVG?

More or less, these additions to SVG just strike me as having solid
potential risk (for both spec-interaction and implementation bugs) and
negligible upside. Do we have people asking for this?

Are there privacy concerns from embedding pings (et al) in images? I
suppose we better have been handling CORS requests from within SVGs
properly already, so it should Just Work?

On Mon, Apr 9, 2018 at 6:26 PM, Cameron McCormack  wrote:
> On Tue, Apr 10, 2018, at 7:56 AM, Jeff Gilbert wrote:
>> Can we not put more things into SVG? Making SVG more complicated seems
>> like it should be an anti-goal for the web platform.
>
> I think we should align the features and behavior of HTML <a> and SVG <a>.
>
>   <svg>
>     <a href="...">something</a>
>   </svg>
>
> I don't see why the two DOM objects for the HTML <a> and SVG <a> should
> behave substantially differently.


Re: Intent to implement and ship: ping, rel, referrerPolicy, relList, hreflang, type and text properties on SVG elements

2018-04-09 Thread Jeff Gilbert
Can we not put more things into SVG? Making SVG more complicated seems
like it should be an anti-goal for the web platform.

On Mon, Apr 9, 2018 at 2:11 PM,   wrote:
> Summary: HTML anchor elements have ping, rel, referrerPolicy, relList, 
> hreflang, type and text properties. SVG anchor elements should support these 
> properties too according to the SVG 2 specification and 
> https://github.com/w3c/svgwg/issues/315.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1451823
>
> Platform coverage: All
> Target release: Firefox 61
> Preference behind which this will be implemented: None
> Is this feature enabled by default in sandboxed iframes: Yes, enabled 
> everywhere
> DevTools bug: None
> Do other browser engines implement this? Microsoft are working on an 
> implementation for Edge. They have created web platform tests for this.
> web-platform-tests: https://github.com/w3c/web-platform-tests/pull/10275
> Secure contexts: n/a


Re: PSA: Building Firefox 61+ with GCC will soon require version GCC 6.1+

2018-04-05 Thread Jeff Gilbert
I have updated our table:
https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code

On Mon, Apr 2, 2018 at 5:30 PM, Jeff Gilbert  wrote:
> Bug 1444274 will bump our minimum GCC version to 6.1. GCC-7 will
> continue to work.
>
> If you build with GCC instead of Clang on Linux, I've been told that
> the system gcc package for Ubuntu 16.04 LTS is gcc-5, so very soon
> you'll need to install a gcc-6 package to continue to build.
>
> With a bump to GCC 6, along with the bump to vs2017, we should be able
> to start relying on c++14 features. This will help out with tracking
> upstream projects which are starting to rely on c++14, as well as
> letting us phase new tools into our c++ toolbox. (relaxed constexpr
> is a big one)


PSA: Building Firefox 61+ with GCC will soon require version GCC 6.1+

2018-04-05 Thread Jeff Gilbert
Bug 1444274 will bump our minimum GCC version to 6.1. GCC-7 will
continue to work.

If you build with GCC instead of Clang on Linux, I've been told that
the system gcc package for Ubuntu 16.04 LTS is gcc-5, so very soon
you'll need to install a gcc-6 package to continue to build.

With a bump to GCC 6, along with the bump to vs2017, we should be able
to start relying on c++14 features. This will help out with tracking
upstream projects which are starting to rely on c++14, as well as
letting us phase new tools into our c++ toolbox. (relaxed constexpr
is a big one)


Re: CPU core count game!

2018-03-31 Thread Jeff Gilbert
Are these actual "physical cores", or is this "hardware threads"?
There's a very big difference between 2 and 2+HT these days.

On Tue, Mar 27, 2018 at 2:02 PM, Mike Conley  wrote:
> Thanks for drawing attention to this, sfink.
>
> This is likely to become more important as we continue to scale up our
> parallelization with content processes and threads.
>
> On 21 March 2018 at 14:54, Steve Fink  wrote:
>
>> Just to drive home a point, let's play a game.
>>
>> First, guesstimate what percentage of our users have systems with 2 or
>> fewer cores.
>>
>> Then visit https://hardware.metrics.mozilla.com/#goto-cpu-and-memory to
>> check your guess.
>>
>> (I didn't say it was a *fun* game.)
>>
>>


Blockers for GCC 6?

2018-03-14 Thread Jeff Gilbert
I have a bug to track updating our requirement to GCC 6:
https://bugzilla.mozilla.org/show_bug.cgi?id=1444274

Are there any other dependents that require GCC 4.9 still?


Re: PSA: Visual Studio 2017 15.6 now required to build 61+

2018-03-13 Thread Jeff Gilbert
Bumping to GCC6 has a tracking bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1444274
This would give us general c++14 capability.

The only blocker I know of is updating Sixgill's (hazard analysis?)
GCC version: https://bugzilla.mozilla.org/show_bug.cgi?id=1444543
If there are no other blockers, upgrading our GCC required version may
follow relatively quickly.


On Tue, Mar 13, 2018 at 1:34 PM, Jeff Gilbert  wrote:
> The patches have landed. Thanks!
>
> Are we ready to update this page?:
> https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code
>
> On Mon, Mar 12, 2018 at 5:29 PM, Ryan VanderMeulen
>  wrote:
>> While I know I'm tempting fate by sending this out while the patches are
>> still on autoland, I wanted to start giving people a heads-up now that bug
>> 1424281 has been pushed, which will make Visual Studio 2017 15.6 (Update 6)
>> the minimum version required to build Gecko 61+ once it merges to m-c.
>>
>> This change brings a number of improvements over version 15.4, which is
>> what we've been using in automation since Gecko 58, including performance
>> wins and better C++17 support. Release notes for versions 15.5 and 15.6 are
>> linked below with more details:
>> https://docs.microsoft.com/en-us/visualstudio/releasenotes/vs2017-relnotes-v15.5#a-idlibimprov-a-visual-c-improvements
>> https://docs.microsoft.com/en-us/visualstudio/releasenotes/vs2017-relnotes#CPlusPlus
>> https://blogs.msdn.microsoft.com/vcblog/2017/12/19/c17-progress-in-vs-2017-15-5-and-15-6/
>>
>> If you're currently using Visual Studio 2015, you can download the 2017
>> installer from https://www.visualstudio.com/vs/community/. If you already
>> have 2017 installed, you should only need to launch the Visual Studio
>> Installer already on your system and follow the update prompts. Note that
>> the Windows SDK minimum version was also bumped to version 15063 to match
>> what we've been using in automation. It is also installable via the Visual
>> Studio Installer if needed.
>>
>> Enjoy!
>>
>> -Ryan


Re: PSA: Visual Studio 2017 15.6 now required to build 61+

2018-03-13 Thread Jeff Gilbert
The patches have landed. Thanks!

Are we ready to update this page?:
https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code

On Mon, Mar 12, 2018 at 5:29 PM, Ryan VanderMeulen
 wrote:
> While I know I'm tempting fate by sending this out while the patches are
> still on autoland, I wanted to start giving people a heads-up now that bug
> 1424281 has been pushed, which will make Visual Studio 2017 15.6 (Update 6)
> the minimum version required to build Gecko 61+ once it merges to m-c.
>
> This change brings a number of improvements over version 15.4, which is
> what we've been using in automation since Gecko 58, including performance
> wins and better C++17 support. Release notes for versions 15.5 and 15.6 are
> linked below with more details:
> https://docs.microsoft.com/en-us/visualstudio/releasenotes/vs2017-relnotes-v15.5#a-idlibimprov-a-visual-c-improvements
> https://docs.microsoft.com/en-us/visualstudio/releasenotes/vs2017-relnotes#CPlusPlus
> https://blogs.msdn.microsoft.com/vcblog/2017/12/19/c17-progress-in-vs-2017-15-5-and-15-6/
>
> If you're currently using Visual Studio 2015, you can download the 2017
> installer from https://www.visualstudio.com/vs/community/. If you already
> have 2017 installed, you should only need to launch the Visual Studio
> Installer already on your system and follow the update prompts. Note that
> the Windows SDK minimum version was also bumped to version 15063 to match
> what we've been using in automation. It is also installable via the Visual
> Studio Installer if needed.
>
> Enjoy!
>
> -Ryan


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jeff Gilbert
It's way cheaper to get build clusters rolling than to get beefy
hardware for every desk.
Distributed compilation or other direct build optimizations also allow
continued use of laptops for most devs, which definitely has value.

On Wed, Jan 17, 2018 at 11:22 AM, Jean-Yves Avenard
 wrote:
>
>
>> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
>>
>> Something simple with the jobserver logic might work here, but I think we
>> want to complete the long-term project of getting a complete dependency
>> graph available before looking at that kind of optimization.
>
> Just get every person needing to work on mac an iMac Pro, and those on 
> Windows/Linux a P710 or better and off we go.
>


Re: Hiding 'new' statements - Good or Evil?

2018-01-04 Thread Jeff Gilbert
I would say it comes down to the module. I'm very comfortable in my
modules using auto, perhaps because our lifetime management is more
clear. To me, the unsafe examples are pretty strawman-y, since it's
very clear that there are at-risk lifetimes involved. (Returning a
bare pointer and relying on it staying valid is bad; const auto& takes
a reference, not a strong reference. If you want a new strong
reference, you obviously can't just use a reference)

`auto` was formalized in c++11, and it is now 2018. C++11 has been
required to compile Firefox since Firefox 25 in mid 2013. The rules
for auto are pretty easy to understand, too. (strips the c/v/r
decorations from the returning type, so that `const int& bar(); auto
foo = bar();` gives `foo` type `int`) In my mind, pushing back on auto
here is effectively a statement that we never expect our reviewing
ability to expand to include any new constructs, because we might
potentially maybe use those new constructs wrong. Yes, we should
always carefully consider new things. No, we shouldn't use every
new-fangled bit of the language. But honestly if you're writing c++
without auto today, it's not modern c++.

If you think auto isn't right for your module, don't accept it. That
much is agreeable, I think. To my knowledge, we've been using it for
years and so far I've only run into one at-all-related issue with it,
which ended up being a miscompilation on gcc<7 that affects all
references (including non-auto) to member scalars.
(https://bugzilla.mozilla.org/show_bug.cgi?id=1329044)

On Thu, Jan 4, 2018 at 11:31 AM, Randell Jesup  wrote:
> [SNIP]
>>> If foo->bar() can mutate the lifetime of foo, of course you must take a 
>>> strong
>>> reference to foo. Nothing about auto or range-for changes this.
>>
>>What auto does is make it really hard to tell whether a strong reference is
>>being taken.
>>
>>> If you don't understand your lifetimes, you get bugs.
>>
>>Fully agreed.  The discussion is about whether auto helps or hinders that
>>understanding.  The answer is that it depends on the surrounding code, from
>>where I sit...
>>
>>-Boris
>
> So, where do we go from here?
>
> It's clear that auto is not as safe as some/many believe it to be; it
> can as Boris (and smaug and Xidorn) say hide lifetime issues and lead to
> non-obvious UAFs and the like.  Some uses are certainly safe, but it
> isn't always obvious looking at a patch (per above), requiring more code
> investigation by the reviewer.  If the resultant type isn't obvious,
> then using it isn't helping comprehension.  When reading the code in the
> future, if the reader has to do non-trivial investigation just to know
> what's going on, it's hurting our work.
>
> If the win is to avoid typing I'd say it's not worth the risk.  Or
> allow it, but only with commentary as to why it's safe, or with rules
> about what sort of types it's allowed with and anything other than those
> requires justification in comments/etc.
>
> --
> Randell Jesup, Mozilla Corp
> remove "news" for personal email


Re: overly strict eslint rules

2017-12-27 Thread Jeff Gilbert
It's not just eslint either. We warn in static analysis now when using
`!std::vector::size()` instead of `empty()`. Such over-prescriptive
linting is unnecessary and unproductive.

On Sun, Dec 24, 2017 at 2:07 PM, Masatoshi Kimura  wrote:
>
> On 2017/12/25 6:31, Jonathan Kingston wrote:
>> It has worked for me for quite some time just running ./mach lint
>> filename.js after bootstrap.
>
> I got the following error when I tried it just now:
>
> ---
> $ mach lint browser/base/content/aboutDialog.js
> eslint-plugin-html v2.0.3 should be v4.0.0.
> ESLint is an old version, clobbering node_modules directory
> Clobbering node_modules...
> Installing eslint for mach using
> "d:\mozilla-build\node-v8.1.4-win-x64\npm.cmd install --loglevel=error"...
> npm ERR! code ENOLOCAL
> npm ERR! Could not install from
> "tools\lint\eslint\eslint-plugin-mozilla" as it does not contain a
> package.json file.
>
> npm ERR! A complete log of this run can be found in:
> npm ERR!
> C:\Users\\AppData\Roaming\npm-cache\_logs\2017-12-24T21_55_36_055Z-debug.log
>
> Error installing eslint, aborting.
> Could not find eslint!  We looked at the --binary option, at the ESLINT
> environment variable, and then at your local node_modules path. Please
> Install
> eslint and needed plugins with:
>
> mach eslint --setup
>
> and try again.
> A failure occured in the eslint linter.
> ✖ 1 problem (0 errors, 0 warnings, 1 failure)
> ---
>
> It is a really frustrating task to run eslint (and keep a working
> environment) on Windows.


Re: Hiding 'new' statements - Good or Evil?

2017-11-28 Thread Jeff Gilbert
const auto& foo = getFoo();
foo->bar(); // foo is always safe here

Use of auto instead of auto& should be less common. I only use it for:
Casts: `const auto foo = static_cast<T>(bar);`
Or long (but obvious) types I need a value of, usually RefPtrs of long types.

Almost every other auto is `const auto&`, which has quite good and
predictable behavior: It names a temporary and extends its lifetime to
the current scope.


As for ranged-for:
For one, range-based iteration is equivalent to normal iterator
iteration, which is required for traversal of non-linear containers.
For two, modifying a container while doing normal index iteration
still causes buggy behavior, so it can't be a panacea. (particularly
if you hoist the size!)

  for (auto foo : myFoos) {
foo->bar();
  }
This is always safe if decltype(foo) is a strong ref. If foo->bar()
can mutate the lifetime of foo, of course you must take a strong
reference to foo. Nothing about auto or range-for changes this. If
foo->bar() can mutate myFoos, even index iteration is likely to have
bugs.


If you don't understand your lifetimes, you get bugs. Keeping to C++03
doesn't save you. Keeping to c++03 just means you only get the c++03
versions of these bugs. (or in our case, ns/moz-prefixed versions of
these bugs)

If you're reviewing code and can't figure out what the lifetimes are:
R-. I know well that sometimes this is the hardest part of review, but
it's a required and important part of the review, and sometimes the
most important precisely because we can't always use a
sufficiently-restricted set of constructs.

Review is sign-off about "I understand this and it's good (enough)".
If you have particular module-specific issues where
otherwise-acceptable constructs are particularly problematic, sure,
review appropriately. In my modules, these constructs are fine.
Perhaps this is because we have a more-constrained problem space than
say dom/html.

TL;DR:
If you have reason to ban a construct for your module: Cool and normal.
But the bar for banning a construct for all modules must be higher than that.
A mismatch for one module is not sufficient reason to ban it in general.

PS: If any reviewers need a crash course in any of these new concepts,
I'm more than happy to set up office hours at the All-Hands.

On Tue, Nov 28, 2017 at 1:35 PM, smaug  wrote:
> On 11/28/2017 06:33 AM, Boris Zbarsky wrote:
>>
>> On 11/27/17 7:45 PM, Eric Rescorla wrote:
>>>
>>> As for the lifetime question, can you elaborate on the scenario you are
>>> concerned about.
>>
>>
>> Olli may have a different concern, but I'm thinking something like this:
>>
>>   for (auto foo : myFoos) {
>> foo->bar();
>>   }
>>
>
> That was pretty much what I had in mind.
> Though, using auto without range-for, so just
> auto foo = getFoo();
> foo->bar(); // is this safe?
>
>
>
>
>> where bar() can run arbitrary script.  Is "foo" held alive across that
>> call?  Who knows; you have to go read the definition of the iterators on the
>> type of myFoos to find out.
>>
>> One possible answer is that the right solution for this type of issue is
>> the MOZ_CAN_RUN_SCRIPT static analysis annotation on bar(), which will make
>> this code not compile if the type of "foo" is a raw pointer. But this
>> annotation is only added to a few functions in our codebase so far, and
>> we'll
>> see how well we manage at adding it to more.  We have a _lot_ of stuff in
>> our codebase that can run random script.  :(
>>
>> -Boris
>
>


Re: Hiding 'new' statements - Good or Evil?

2017-11-27 Thread Jeff Gilbert
ranged-for issues are the same as those for doing manual iteration,
and your complaints about auto are caused by deficient code/design
review.

The further we deviate from standards, the harder it is for
contributors (and not just new ones) to do the right thing. The
default should be to move towards the standard, not cringe back from
it. Sure, prove it out. But we really don't need more moz-specific
constructs. We should choose our deviations carefully.

On Mon, Nov 27, 2017 at 3:24 AM, smaug  wrote:
> On 11/27/2017 01:05 PM, Nicolas B. Pierron wrote:
>>
>> On 11/26/2017 12:45 AM, smaug wrote:
>>>
>>> On 11/24/2017 06:35 PM, Eric Rescorla wrote:

 On Thu, Nov 23, 2017 at 4:00 PM, smaug  wrote:

> On 11/23/2017 11:54 PM, Botond Ballo wrote:
>
>> I think it makes sense to hide a 'new' call in a Make* function when
>> you're writing an abstraction that handles allocation *and*
>> deallocation.
>>
>> So MakeUnique makes sense, because UniquePtr takes ownership of the
>> allocated object, and will deallocate it in the destructor. MakeRefPtr
>> would make sense for the same reason.
>>
> I almost agree with this, but, all these Make* variants hide the
> information that they are allocating,
> and since allocation is slow, it is usually better to know when
> allocation
> happens.
> I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear
> about
> the functionality.
>

 This seems like a reasonable argument in isolation, but I think it's
 more
 important to mirror the standard C++ mechanisms and C++-14 already
 defines
 std::make_unique.
>>>
>>>
>>> Is it? Isn't it more important to write less error prone code and code
>>> where the reader
>>> sees the possible performance issues.
>>> Recent-ish C++ additions have brought quite a few bad concepts which we
>>> shouldn't use
>>> without considering whether they actually fit into our needs.
>>> Something being new and hip and cool doesn't mean it is actually a good
>>> thing.
>>
>>
>> One example, is that the standard has a single std::function type which
>> always allocate, and does not report failures because we disable exceptions.
>>
>> In this particular case I would recommend to have a stack-function wrapper
>> and a heap-function wrapper. By diverging from the standard we could avoid
>> bugs such as refusing stack-references for lambda being wrapped in
>> heap-function (use-after-free/unwind), and adding extra checks that
>> stack-function
>> wrapper can only be fields of stack-only classes.  C++ compilers have no
>> way to understand the code enough to provide similar life-time errors.
>>
>>> (This broken record keeps reminding about the security issues and memory
>>> leaks auto and ranged-for have caused.)
>>
>>
>> Do you have pointers? Is this basically when we have a non-standard
>> implementation of begin/end methods which are doing allocation?
>
>
> ranged-for causes issues when you modify the data structure while iterating.
> This used to be bad in nsTArray case at least, now we just crash safely.
>
>
> And auto hides the type, so reading the code and understanding lifetime
> management becomes harder.
> This has caused security sensitive crashes and leaks. (And there was some
> other type of error too recently, but it escapes my mind now)
>
>
>>
>>>

 As a post-script, given that we now can use C++-14, can we globally
 replace
 the MFBT clones of C++-14 mechanisms with the standard ones?

 -Ekr


>
>
> -Olli
>
>
>
>> But in cases where a library facility is not taking ownership of the
>> object, and thus the user will need to write an explicit 'delete', it
>> makes sense to require that the user also write an explicit 'new', for
>> symmetry.
>>
>> NotNull is a bit of a weird case because it can wrap either a raw
>> pointer or a smart pointer, so it doesn't clearly fit into either
>> category. Perhaps it would make sense for MakeNotNull to only be
>> usable with smart pointers?
>>
>> Cheers,
>> Botond
>>
>>


Re: Hiding 'new' statements - Good or Evil?

2017-11-23 Thread Jeff Gilbert
I agree that MakeNotNull doesn't sound like it allocates. Reads to me
as Make[Into]NotNull. Context will *usually* make this clear, but why
not NewNotNull? (Honestly I don't like MakeUnique to begin with)

NotNull::New(args...) is another option.

"My first goal was to avoid the unnecessary nullptr-check in the
NotNull constructor, since our new is infallible by default."
Even if we can't communicate to the optimizer that new is infallible
and incur it to omit the branch in this inlined code, it would be
extremely well predicted.
MOZ_UNLIKELY() it and call it good, IMO.

On Thu, Nov 23, 2017 at 12:58 AM,   wrote:
> I'm not sure what the benefits are (the MakeUnique comments only really seem 
> to give aesthetic ones), but they do make it a bit harder when searching 
> through code to see where things are constructed.
>
> I suppose if we always stick to using Make* then (with regex) it wouldn't add 
> too much burden for searching.
>
> On Thursday, 23 November 2017 07:50:27 UTC, gsqu...@mozilla.com  wrote:
>> Should we allow hiding 'new' statements, or keep them as visible as possible?
>>
>>
>> Some context:
>> Recently in bug 1410252 I added a MakeNotNull(args...) function that 
>> does `NotNull(new T(args...))`, in the style of MakeUnique and others. 
>> It also works with RefPtr.
>>
>> My first goal was to avoid the unnecessary nullptr-check in the NotNull 
>> constructor, since our new is infallible by default.
>>
>> And I thought that hiding the naked new statement was a nice side benefit, 
>> as I was under the impression that it was A Good Thing in modern C++. 
>> (Though I think the main reason for that, was to prevent leaks when handling 
>> exceptions in multi-allocation expressions, so it doesn't apply to us here.)
>>
>> Now, a colleague remarked that this was making the memory allocation less 
>> obvious.
>> "What about MakeUnique?" I asked; "I'm used to them now" he retorted. :-P
>>
>>
>> So, what do you all think?
>> - Should I remove MakeNotNull?
>> - Or should we allow/encourage more MakeX functions instead of X(new...)? 
>> I'm thinking MakeRefPtr might be nice.
>


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-13 Thread Jeff Gilbert
As long as we follow PEP 394, I'm super excited. (including on our
mozilla-build windows system, which counts as 'unix-like')

On Sun, Nov 12, 2017 at 9:12 AM, David Burns  wrote:
> I am not saying it should but if we have a requirement for python 3, we are
> also going to have a requirement for py2 to both be available for local
> development.
>
> David
>
> On 11 November 2017 at 14:10, Andrew Halberstadt 
> wrote:
>
>> On Fri, Nov 10, 2017 at 9:44 PM David Burns  wrote:
>>
>>> My only concern about this is how local developer environments are going
>>> to be when it comes to testing. While I am sympathetic to moving to python
>>> 3 we need to make sure that all the test harnesses have been moved over and
>>> this is something that needs a bit of coordination. Luckily a lot of the
>>> mozbase stuff is already moving to python 3 support but that still means we
>>> need to have web servers and the actual test runners moved over too.
>>>
>>> David
>>>
>>
>> For libraries like mozbase, I think the plan will be to support both 2 and
>> 3 at
>> the same time. There are libraries (like 'six') that make this possible.
>> I'd bet
>> there are even parts of the build system that will still need to support
>> both at
>> the same time.
>>
>> With that in mind, I don't think python 3 support for test harnesses needs
>> to
>> block the build system.
>>
>> Andrew
>>
>> On 10 November 2017 at 23:27, Gregory Szorc  wrote:
>>>
 For reasons outlined at https://bugzilla.mozilla.org/
 show_bug.cgi?id=1388447#c7, we would like to make Python 3 a
 requirement to build Firefox sometime in the Firefox 59 development cycle.
 (Firefox 59 will be an ESR release.)

 The requirement will likely be Python 3.5+. Although I would love to
 make that 3.6 if possible so we can fully harness modern features and
 performance.

 I would love to hear feedback - positive or negative - from downstream
 packagers and users of various operating systems and distributions about
 this proposal.

 Please send comments to dev-bui...@lists.mozilla.org or leave them on
 bug 1388447.

 ___
 dev-builds mailing list
 dev-bui...@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-builds

 ___
>>> dev-builds mailing list
>>> dev-bui...@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-builds
>>>
>>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Gilbert
If you don't want to get into the weeds on ECC again, please do not
reinitiate discussion. I do not agree that "the additional cost of ECC
is very low compared to the cost of developer time over the two years
that they're expected to use it", but I will restrict my disagreement
to the forked thread that you created. Please repost there.

On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione  wrote:
> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>>
>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>> wrote:
>>>
>>> Hi All,
>>>
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>>
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>
>>
>> What's the advantage of having a "one size fits all" machine? I
>> imagine there's quite a range of uses and preferences for these
>> machines. e.g some people are going to be spending more time waiting
>> for a single core and so would prefer a smaller core count and higher
>> clock, other people want a machine that's as wide as possible. Some
>> people would value performance over correctness and so would likely
>> not want ECC. etc. I've heard a number of horror stories of people
>> ending up with hardware that's not well suited to their tasks just
>> because that was the only hardware on the list.
>
>
> High core count Xeons will divert power from idle cores to increase the
> clock speed of saturated cores during mostly single-threaded workloads.
>
> The advantage of a one-size-fits-all machine is that it means more of us
> have the same hardware configuration, which means fewer of us running into
> independent issues, more of us being able to share software configurations
> that work well, easier purchasing and stocking of upgrades and accessories,
> ... I own a personal high-end Xeon workstation, and if every developer at
> the company had to go through the same teething and configuration troubles
> that I did while breaking it in, we would not be in a good place.
>
> And I don't really want to get into the weeds on ECC again, but the
> performance of load-reduced ECC is quite good, and the additional cost of
> ECC is very low compared to the cost of developer time over the two years
> that they're expected to use it.
>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Gilbert
Avoid workstation GPUs if you can. At best, they're just a more
expensive consumer GPU. At worst, they may sacrifice performance we
care about in their optimization for CAD and modelling workloads, in
addition to moving us further away from testing what our users use. We
have no need for workstation GPUs, so we should avoid them if we can.

On Mon, Nov 6, 2017 at 10:32 AM, Sophana "Soap" Aik  wrote:
> Hi All,
>
> I'm in the middle of getting another evaluation machine with a 10-core
> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
> speed and performance) but with ECC memory support.
>
> I'm trying to make sure this is a "one size fits all" machine as much as
> possible.
>
> Also there are some AMD Radeon workstation GPU's that look interesting to
> me. The one I was thinking to include was a Radeon Pro WX2100, 2GB, FH
> (5820T) so we can start testing that as well.
>
> Stay tuned...
>
> On Mon, Nov 6, 2017 at 12:46 AM, Henri Sivonen  wrote:
>
>> Thank you for including an AMD card among the ones to be tested.
>>
>> - -
>>
>> The Radeon RX 460 mentioned earlier in this thread arrived. There was
>> again enough weirdness that I think it's worth sharing in case it
>> saves time for someone else:
>>
>> Initially, for multiple rounds of booting with different cable
configurations, the Lenovo UEFI consistently displayed nothing if a
>> cable with a powered-on screen was plugged into the DisplayPort
>> connector on the RX 460. To see the boot password prompt or anything
>> else displayed by the Lenovo UEFI, I needed to connect a screen to the
>> DVI port and *not* have a powered-on screen connected to DisplayPort.
>> However, Lenovo UEFI started displaying on a DisplayPort-connected
>> screen (with or without DVI also connected) after one time I had had a
>> powered-on screen connected to DVI and a powered-off screen connected
>> to DisplayPort at the start of the boot and I turned on the
>> DisplayPort screen while the DVI screen was displaying the UEFI
>> password prompt. However, during that same boot, I happened not to
>> have a keyboard connected, because it was connected via the screen
>> that was powered off, and this caused an UEFI error, so I don't know
>> which of the DisplayPort device powering on during the UEFI phase or
>> UEFI going through an error phase due to missing keyboard jolted it to
>> use the DisplayPort screen properly subsequently. Weird.
>>
>> On the Linux side, the original Ubuntu 16.04 kernel (4.4) supported
>> only a low resolution fallback mode. Rolling the hardware enablement
>> stack forward (to 4.10 series kernel using the incantation given at
>> https://wiki.ubuntu.com/Kernel/LTSEnablementStack ) fixed this and
>> resulted in Firefox reporting WebGL2 and all. The fix for
>> https://bugzilla.kernel.org/show_bug.cgi?id=191281 hasn't propagated
>> to Ubuntu 16.04's latest HWE stack, which looks distressing during
>> boot, but it seems harmless so far.
>>
>> I got the 4 GB model, since it was available at roughly the same price
>> as the 2 GB model. It supports both screens I have available for
>> testing at their full resolution simultaneously (2560x1440 plugged
>> into DisplayPort and 1920x1200 plugged into DVI).
>>
>> The card is significantly larger than the Quadro M2000. It takes the
>> space of two card slots (connects to one, but the heat sink and the
>> dual fans take the space of another slot). The fans don't appear to
>> make an audible difference compared to the Quadro M2000.
>>
>> On Fri, Oct 27, 2017 at 6:19 PM, Sophana "Soap" Aik 
>> wrote:
>> > Thank you Henri for the feedback.
>> >
>> > How about this, we can order some graphics cards and put them in the
>> > evaluation/test machine that is with Greg, to make sure it has good
>> > compatibility.
>> >
>> > We could do:
>> > Nvidia GTX 1060 3GB
>> > AMD Radeon RX570
>> >
>> > These two options will ensure it can drive multi displays.
>> >
>> > Other suggestions welcomed.
>> >
>> > Greg, is that something you think we should do?
>> >
>> > On Thu, Oct 26, 2017 at 11:33 PM, Henri Sivonen 
>> > wrote:
>> >>
>> >> On Fri, Oct 27, 2017 at 4:48 AM, Sophana "Soap" Aik 
>> >> wrote:
>> >> > Hello everyone, great feedback that I will keep in mind and continue
>> to
>> >> > work
>> >> > with our vendors to find the best solution with. One of the cards
>> that I
>> >> > was
>> >> > looking at is fairly cheap and can at least drive multi-displays (even
>> >> > 4K
>> >> > 60hz) was the Nvidia Quadro P600.
>> >>
>> >> Is that GPU known to be well-supported by Nouveau of Ubuntu 16.04
>> vintage?
>> >>
>> >> I don't want to deny a single-GPU multi-monitor setup to anyone for
>> >> whom that's the priority, but considering how much damage the Quadro
>> >> M2000 has done to my productivity (and from what I've heard from other
>> >> people on the DOM team, I gather I'm not the only one who has had
>> >> trouble with it), the four DisplayPort connectors on it look like very
>> >> bad economics.
>> >>
>> >> I 

Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Jeff Gilbert
My understanding of current policy is that ECC is not required. (and
not even an option with MacBook Pros) Given the volume of development
that happens unhindered on our developers' many, many non-ECC
machines, I believe the burden of proof is on the pro-ECC
argument to show that it's likely to be a worthwhile investment for
our use-cases.

As for evidence for lack of ECC being a non-issue, I call to witness
the vast majority of Firefox development, most applicably that portion
done in the last ten years, and especially all MacOS development
excluding the very few Mac Pros we have.

If we've given developers ECC machines already when non-ECC was an
option, absent a positive request for ECC from the developer, I would
consider this to have been a minor mistake.

On Mon, Nov 6, 2017 at 3:03 PM, Gabriele Svelto  wrote:
> On 06/11/2017 22:44, Jeff Gilbert wrote:
>> Price matters, since every dollar we spend chasing ECC would be a
>> dollar we can't allocate towards perf improvements, hardware refresh
>> rate, or simply more machines for any build clusters we may want.
>
> And every day our developers or IT staff waste chasing apparently random
> issues is a waste of both money and time.
>
>> The paper linked above addresses massive compute clusters, which seems
>> to have limited implications for our use-cases.
>
> The clusters are 6000 and 8500 nodes respectively, quite small by
> today's standards. How many developers do we have? Hundreds for sure, it
> could be a thousand looking at our current headcount so we're in the
> same ballpark.
>
>> Nearly every machine we do development on does not currently use ECC.
>> I don't see why that should change now.
>
> Not true. The current Xeon E5-based ThinkStation P710 available from
> Service Now has ECC memory and so did the previous models in the last
> five years. Having a workstation available w/o ECC would actually be a
> step backwards.
>
>> To me, ECC for desktop compute
>> workloads crosses the line into jumping at shadows, since "restart
>> your machine slightly more often than otherwise" is not onerous.
> Do you have data to prove that this is not an issue?
>
>  Gabriele
>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Jeff Gilbert
Price matters, since every dollar we spend chasing ECC would be a
dollar we can't allocate towards perf improvements, hardware refresh
rate, or simply more machines for any build clusters we may want.

The paper linked above addresses massive compute clusters, which seems
to have limited implications for our use-cases.

Nearly every machine we do development on does not currently use ECC.
I don't see why that should change now. To me, ECC for desktop compute
workloads crosses the line into jumping at shadows, since "restart
your machine slightly more often than otherwise" is not onerous.

On Mon, Nov 6, 2017 at 9:19 AM, Gregory Szorc  wrote:
>
>
>> On Nov 6, 2017, at 05:19, Gabriele Svelto  wrote:
>>
>>> On 04/11/2017 01:10, Jeff Gilbert wrote:
>>> Clock speed and core count matter much more than ECC. I wouldn't chase
>>> ECC support for general dev machines.
>>
>> The Xeon-W SKUs I posted in the previous thread all had identical or
>> higher clock speeds than equivalent Core i9 SKUs and ECC support with
>> the sole exception of the i9-7980XE which has slightly higher (100MHz)
>> peak turbo clock than the Xeon W-2195.
>>
>> There is IMHO no performance-related reason to skimp on ECC support
>> especially for machines that will sport a significant amount of memory.
>>
>> Importance of ECC memory is IMHO underestimated mostly because it's not
>> common and thus users do not realize they may be hitting memory errors
>> more frequently than they realize. My main workstation is now 5 years
>> old and has accumulated 24 memory errors; that may not seem much but if
>> it happens at a bad time, or in a bad place, they can ruin your day or
>> permanently corrupt your data.
>>
>> As another example of ECC importance my laptop (obviously) doesn't have
>> ECC support and two years ago had a single bit that went bad in the
>> second DIMM. The issue manifested itself as internal compiler errors
>> while building Fennec. The first time I just pulled again from central
>> thinking it was a fluke, the second I updated the build dependencies
>> which I hadn't done in a while thinking that an old GCC might have been
>> the cause. It was not until the third day with a failure that I realized
>> what was happening. A 2-hours long memory test showed me the second DIMM
>> was bad so I removed it, ordered a new one and went on to check my
>> machine. I had to purge my compilation cache because garbage had
>> accumulated in there, run an hg verify on my repo as well as verifying
>> all the installed packages for errors. Since I didn't have access to my
>> main workstation at the time I had wasted 3 days chasing the issue and
>> my workflow was slowed down by a cold compilation cache and a gimped
>> machine (until I could replace the DIMM).
>>
>> This is not common, but it's not rare either and we now have hundreds of
>> developers within Mozilla so people are going to run into issues that
>> can be easily prevented by having ECC memory.
>>
>> That being said ECC memory also makes machines less susceptible to
>> Rowhammer-like attacks and makes them detectable while they are happening.
>>
>> For a more in-depth reading on the matter I suggest reading "Memory
>> Errors in Modern Systems - The Good, The Bad, and The Ugly" [1] in which
>> the authors analyze memory errors on live systems over two years and
>> argue that SEC-DED ECC (the type of protection you usually get on
>> workstations) is often insufficient and even chipkill ECC (now common on
>> most servers) is not enough to catch all errors happening during real
>> world use.
>>
>> Gabriele
>>
>> [1] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf
>>
>
> The Xeon-W’s are basically the i9’s (both Skylake-X) with support for ECC, 
> more vPRO, and AMT. The Xeon-W’s lack Turbo 3.0 (preferred core). However, 
> Turbo 2.0 apparently reaches the same MHz, so I don’t think it matters much. 
> There are some other differences with regards to PCIe lanes, chipset, etc.
>
> Another big difference is price. The Xeon’s cost a lot more.
>
> For building Firefox, the i9’s and Xeon-W are probably very similar (and is 
> something we should test). It likely comes down to whether you want to pay a 
> premium for ECC and other Xeon-W features. I’m not in a position to answer 
> that.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-03 Thread Jeff Gilbert
Clock speed and core count matter much more than ECC. I wouldn't chase
ECC support for general dev machines.

On Thu, Nov 2, 2017 at 6:46 PM, Gregory Szorc  wrote:
> On Thu, Nov 2, 2017 at 3:43 PM, Nico Grunbaum  wrote:
>
>> For rr I have an i7 desktop with a base clock of 4.0 Ghz, and for building
>> I use icecc to distribute the load (or rather I will be again when bug
>> 1412240[0] is closed).  The i9 series has lower base clocks (2.8 Ghz, and
>> 2.6Ghz for the top SKUs)[1], but high boost clocks of 4.2 Ghz.  If I were
>> to switch over to an i9 for everything, would I see a notable difference in
>> performance in rr?
>>
>
> Which i7? You should get better CPU efficiency with newer
> microarchitectures. The i9's we're talking about are based on Skylake-X
> which is based on Skylake which are the i7-6XXX models in the consumer
> lines. It isn't enough to compare MHz: you need to also consider
> microarchitectures, memory, and workload.
>
> https://arstechnica.com/gadgets/2017/09/intel-core-i9-7960x-review/2/ has
> some single-threaded benchmarks. The i7-7700K (Kaby Lake) seems to "win"
> for single-threaded performance. But the i9's aren't far behind. Not far
> enough behind to cancel out the benefits of the extra cores IMO.
>
> This is because the i9's are pretty aggressive about using turbo. More
> aggressive than the Xeons. As long as cooling can keep up, the top-end GHz
> is great and you aren't sacrificing that much perf to have more cores on
> die. You can counter by arguing that the consumer-grade i7's can yield more
> speedups via overclocking. But for enterprise uses, having this all built
> into the chip so it "just works" without voiding warranty is a nice trait :)
>
> FWIW, the choice to go with Xeons always bothered me because we had to make
> an explicit clock vs core trade-off. Building Firefox requires both many
> cores for compiling and fast cores for linking. Since the i9's turbo so
> well, we get the best of both worlds. And at a much lower price. Aside from
> the loss of ECC, it is a pretty easy decision to switch.
>
>
>> -Nico
>>
>> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure in
>> libavutil (missing atomic definitions), when building with clang and icecc
>>
>> [1] https://ark.intel.com/products/series/123588/Intel-Core-X-
>> series-Processors
>>
>> On 10/27/17 7:50 PM, Robert O'Callahan wrote:
>>
>>> BTW can someone forward this entire thread to their friends at AMD so AMD
>>> will fix their CPUs to run rr? They're tantalizingly close :-/.
>>>
>>> Rob
>>>
>>


Re: #pragma once?

2017-10-11 Thread Jeff Gilbert
Our insistence on namespace-miming include directories stands in the way again.

On Wed, Oct 11, 2017 at 3:40 AM, Marco Bonardo  wrote:
> See the previous pragma once proposal:
> https://groups.google.com/forum/#!msg/mozilla.dev.platform/PgDjWw3xp8k/PLYQc5xoWmsJ
>
> On Wed, Oct 11, 2017 at 11:45 AM, Emilio Cobos Álvarez  
> wrote:
>> Hi,
>>
>> I'm adding a header to the build, and I'm wondering: Can we use pragma once?
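For readers following along, a minimal sketch of the two idioms being compared (the header name and function are hypothetical, not from the tree):

```cpp
// Classic include guard: the portable idiom our coding style prescribes.
// The guard macro conventionally mirrors the header's include path, which
// is why the include-directory layout matters here.
#ifndef mozilla_widget_Example_h
#define mozilla_widget_Example_h

inline int ExampleAnswer() { return 42; }

#endif  // mozilla_widget_Example_h

// The same header written with the non-standard (but near-universally
// supported) pragma needs no unique macro name at all:
//
//   #pragma once
//
//   inline int ExampleAnswer() { return 42; }
```

The usual caveat (raised in the earlier proposal linked above) is that #pragma once relies on the compiler deciding that two paths are "the same file", which can misbehave with copied or symlinked headers, whereas guards depend only on the macro name being unique.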


Re: Follow up on clang-format

2017-09-29 Thread Jeff Gilbert
Now instead we will get to try to phrase code in a way that
clang-format will preserve readably?

I should think "it doesn't always produce nice formatting, but it's at
least consistent" sounds like a major warning sign, particularly given
that we definitely have tricky code for which we have made efforts to
maximize readability and maintainability.

Maybe this is more useful for our more poorly manicured modules, but
that it will drag down well-maintained code in favor of a
lowest-common-denominator style is absolutely concerning to me.

On Thu, Sep 28, 2017 at 1:15 AM, Chris Pearce  wrote:
> On Friday, May 23, 2014 at 3:29:48 AM UTC+2, Anthony Jones wrote:
>> Some of you may remember the discussion on clang-format and the `mach
>> clang-format` command. What we have in place right now is very temporary
>> but it is functional enough to give it a try. I have not put the effort
>> into upstreaming my changes. Depending on the feedback I receive I will
>> either:
>>
>> * Finish my existing changes and upstream them
>> * Remove the `mach clang-format` command altogether
>> * Do nothing
>>
>> I have personally found it useful. However I would like to hear from
>> other people who have tried it to help me decide what to do next.
>>
>> Anthony
>
> I use `./mach clang-format` on basically every patch I write. It has 
> eliminated the style nits that get picked up when people review my patches, 
> at least when I remember to run it!
>
> I've considered writing a commit hook to run `./mach clang-format` before 
> every commit, but haven't gotten around to it...
>
> `mach clang-format` is awesome, please don't remove it.
>
> As others have pointed out, it doesn't always produce nice formatting, but 
> it's at least consistent, and enforces the major style guide recommendations.
>
> It's also liberating not having to waste brain power deciding how to format 
> code.
>
>
> cpearce.


Re: Coding style: `else for` or `else { for... }`?

2017-08-30 Thread Jeff Gilbert
IMO: Never else-for. (or else-while)

Else-if is a reasonable continuation of concept: "Well it wasn't that,
what if it's this instead?"
Else-for is just shorthand for "well it wasn't that, so let's loop
over something".

Else-if is generally used for chaining, often effectively as an
advanced switch/match statement.

I've never actually seen else-for, but I imagine it would be used for:
if (!foo) {
} else for (i = 0; i < foo->bar; i++) {
}

I don't think this pattern has enough value to be acceptable as
shorthand. BUT, I didn't know it was a thing until now, so I'm
naturally biased against it.

On Wed, Aug 30, 2017 at 2:58 PM,   wrote:
> Let's keep the flames alive!
>
> Should we always put braces after an `else`, with the only exception being 
> another `if`?
> Or should we also have exceptions for the other control structures (while, 
> do, for, switch)?
>
> A.
> if (...) {
>   ...
> } else {
>   for (...) {
> ...
>   }
> }
>
> B.
> if (...) {
>   ...
> } else for (...) {
>   ...
> }
>
> I can see arguments for&against both, so I'm not too sure which one should 
> win. :-)
> WDYT?


Re: windows build anti-virus exclusion list?

2017-04-20 Thread Jeff Gilbert
Can confirm for Ryzen: FWIW, my stock R7 1800X (RAM @2133 MHz for now
:( ) did a Windows debug clobber in 15:32.40 with the default -j16.
(any directory that showed up as read by MsMpEng in Resource Monitor
excluded for Defender)

I'll give it a run through with Defender disabled, and see if there's
any change.

On Tue, Apr 18, 2017 at 2:17 AM, Marco Bonardo  wrote:
> On Fri, Mar 17, 2017 at 7:20 PM, Ben Kelly  wrote:
>
>> On Fri, Mar 17, 2017 at 1:36 PM, Ted Mielczarek 
>> wrote:
>>
>> With defender
>> disabled the best I can get is 18min.  This is on one of the new lenovo
>> p710 machines with 16 xeon cores.
>>
>
> Just to add one datapoint, I just replaced my desktop with an AMD Ryzen
> 1700x (OC @3.8GHz) and I can clobber in less than 18mins on something that
> may be cheaper than those XEON machines (an OC 1700 can probably do the
> same). I'm currently using -J20, I didn't yet try to bump that up to see
> what's the limit, I'm positive I can get better times out of this machine,
> but I need to find some time for testing.
> I'm using Panda Free, with the SSD containing the sources in its exclusion
> list, didn't do anything to Defender (but I guess it's disabled by having
> another AV in the system).
> You can also disable Defender with a GPO, by setting "Turn off Windows
> Defender" to Enabled.
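For anyone scripting the exclusions discussed in this thread rather than clicking through the Defender UI, a sketch using the stock Defender cmdlets from an elevated PowerShell — a config fragment only; the paths are placeholders for your own checkout and objdir:

```powershell
# Exclude the source checkout and object directory from real-time scanning.
Add-MpPreference -ExclusionPath "C:\mozilla-source"
Add-MpPreference -ExclusionPath "C:\mozilla-source\obj-x86_64-pc-mingw32"

# Excluding the build processes themselves also helps; the exact names
# depend on your toolchain.
Add-MpPreference -ExclusionProcess "cl.exe"
Add-MpPreference -ExclusionProcess "link.exe"
```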


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Gilbert
They're basically out of stock now, but if you can find them, old
refurbished 2x Intel Xeon E5-2670 (2.6GHz Eight Core) machines were
bottoming out under $1000/ea. It happily does GCC builds in 8m, and I
have clang builds down to 5.5 minutes. As the v2s leave warranty, similar
machines may hit the market again.

I'm interested to find out how the new Ryzen chips do. It should fit
their niche well. I have one at home now, so I'll test when I get a
chance.

On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
 wrote:
> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>
>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>> >
>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>> >
>> > 24s here. So faster link times and significantly faster clobber times. I'm
>> > sold!
>> >
>> > Any motherboard recommendations? If we want developers to use machines
>> > like this, maintaining a current config in ServiceNow would probably
>> > help.
>>
>>
>> Until the ServiceNow catalog is updated...
>>
>> The Lenovo ThinkStation P710 is a good starting point (
>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>> From the default config:
>>
>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>
>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>> (plus/minus a few hundred depending on configuration specific).
>>
>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>> is absurd (costs >$2000 more when fully configured). Lenovo's
>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>> somewhere will sell the same thing for cheaper.
>>
>> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
>> got the
>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>> 512 GB SSD, the price was very reasonable compared to similar
>> configurations at Dell, HP, others.
>>
>> The just-released Broadwell-E processors with 6-10 cores are also nice
>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>> have no links to share. They should be <$2600 fully configured. That's a
>> good price point between the i7-6700K and a dual socket Xeon. Although if
>> you do lots of C++ compiling, you should get the dual socket Xeons (unless
>> you have access to more cores in an office or a remote machine).
>
>  The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
>  hdd for $1525 cad so probably just under $1000 usd.  With just that
>  machine I can do a 10 minute linux debug build.  For less than the
>  price of the e3 machine quoted above I can buy 4 of those machines
>  which I expect would produce build times under 5:00.
>
> I believe with 32 GB of RAM there's enough fs cache that disk
> performance doesn't actually matter, but it might be worth
> investigating moving an SSD to that machine at some point.
>
> So I would tend to conclude Xeons are not a great deal unless you really
> need to build for windows a lot before someone gets icecc working there.
>
> Trev
>
>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
>> out of the box. And, yes, you should use Windows 10 as your primary OS
>> because that's what our users mostly use. I run Hyper-V under Windows 10
>> and have at least 1 Linux VM running at all times. With 32 GB in the
>> system, there's plenty of RAM to go around and Linux performance under the
>> VM is excellent. It feels like I'm dual booting without the rebooting part.


Re: Please do NOT hand-edit web platform test MANIFEST.json files

2017-03-21 Thread Jeff Gilbert
JSON allows comments if all the JSON processors we use handle comments. :)

On Tue, Mar 21, 2017 at 8:52 AM, James Graham  wrote:
> On 20/03/17 22:15, gsquel...@mozilla.com wrote:
>
>> Sorry if it's a silly suggestion:
>> Could the current tool insert some helpful reminders *everywhere* in the
>> generated file (so it's can't be missed)?
>> E.g., every 2nd line would read: "// PSA: This file is auto-generated by
>> ./mach wpt-manifest-update, please don't edit!" ;-)
>
>
> It is of course possible but it's not trivial since JSON doesn't allow
> comments, so there would have to be some preprocessing magic and so on.
> Probably better to spend the time on a good solution.
>
>


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
Let's drop this side-topic from this thread.

On Thu, Feb 16, 2017 at 4:40 PM, Martin Thomson  wrote:
> On Fri, Feb 17, 2017 at 10:39 AM, Robert O'Callahan
>  wrote:
>>> Using clang-format on the entire tree has the massive value of:
>>
>> Also, it's very liberating not having to think about formatting while
>> writing code, knowing it will be automatically fixed up later. This is
>> especially helpful for partially-automated changes such as mass
>> find-and-replace.
>
> NSS recently reached this point.  Formatting is checked in CI and both
> David and roc are right about the benefits.  We even have a pre-commit
> hook now that catches errors early.
>
> It was only mildly disruptive getting there, but worth it.  Firefox
> is, of course, bigger and more diverse, but I don't see why it
> couldn't eventually reach the same point.


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
I'll not get into why I disagree with your points about
format-the-world here. Let's leave that for elsewhere.

Here's my take on the example at hand:
https://pastebin.mozilla.org/8979527
In this case, I do like the latter two best. (Old-style and simplification)

On Thu, Feb 16, 2017 at 3:25 PM, L. David Baron  wrote:
> On Thursday 2017-02-16 15:11 -0800, Jeff Gilbert wrote:
>> I would not assume that's necessarily going to happen without
>> contention. Consistency is not a goal in and of itself.
>
> Using clang-format on the entire tree has the massive value of:
>
>  * reviewers not needing to point out whitespace and indentation issues
>
>  * reduced friction for new contributors (being able to use a tool
>instead of repeatedly iterating on review comments)
>
>  * reduced friction for existing contributors working in new areas
>of the codebase
>
> which outweighs indvidual style preferences or historic module
> differences here.
>
> We should try to do something as close as reasonably possible to the
> existing dominant style so that we disrupt annotate/blame the least.
>
>
> As for the proposal to switch to start-of-line && and || -- I
> wouldn't want to see us try to make such a change *before* we
> clang-format the entire tree, because we'll just create an
> inconsistent mess that makes it harder to figure out what the local
> style is.  I'd be slightly more willing to consider it *after* we
> clang-format the whole tree (or simultaneously, which is perhaps
> better for blame since there's only one revision to skip), but then
> we need to balance it against the cost of the increased disruption
> to blame that makes understanding the history of our code harder.
>
> -David
>
>> On Thu, Feb 16, 2017 at 3:04 PM,   wrote:
>> > There's an ongoing effort to use clang-format to automatically format All 
>> > The Codes [1], so I think we should try and make sure we agree (or at 
>> > least settle*) on a coding style when it's going to be enforced 
>> > everywhere. ;-)
>> >
>> > * I'm presenting my own preferences here, but in the end I'm fine with 
>> > complying with whatever we can all agree to live with. Consistency is more
>> > important than personal preferences.
>> >
>> > g.
>> >
>> > On Friday, February 17, 2017 at 9:57:04 AM UTC+11, Jeff Gilbert wrote:
>> >> I don't visually like ||/&& at start of line, but I can't fault the
>> >> reasoning, so I'm weakly for it.
>> >> I don't think it's important enough to change existing code though.
>> >>
>> >> On Thu, Feb 16, 2017 at 1:47 PM,   wrote:
>> >> > Question of the day:
>> >> > When breaking overlong expressions, should &&/|| go at the end or the 
>> >> > beginning of the line?
>> >> >
>> >> > TL;DR: Coding style says 'end', I&others think we should change it to 
>> >> > 'beginning' for better clarity, and consistency with other operators.
>> >> >
>> >> >
>> >> > Our coding style reads:
>> >> > "Break long conditions after && and || logical connectives. See below 
>> >> > for the rule for other operators." [1]
>> >> > """
>> >> > Overlong expressions not joined by && and || should break so the 
>> >> > operator starts on the second line and starts in the same column as the 
>> >> > start of the expression in the first line. This applies to ?:, binary 
>> >> > arithmetic operators including +, and member-of operators (in 
>> >> > particular the . operator in JavaScript, see the Rationale).
>> >> >
>> >> > Rationale: operator at the front of the continuation line makes for 
>> >> > faster visual scanning, because there is no need to read to end of 
>> >> > line. Also there exists a context-sensitive keyword hazard in 
>> >> > JavaScript; see bug 442099, comment 19, which can be avoided by putting 
>> >> > . at the start of a continuation line in long member expression.
>> >> > """ [2]
>> >> >
>> >> >
>> >> > I initially focused on the rationale, so I thought *all* operators 
>> >> > should go at the front of the line.
>> >> >
>> >> > But it seems I've been living a lie!
> >> >> > &&/|| should apparently be at the end, while other operators (in some
> >> >> > situations) should be at the beginning.

Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
If there are so many parens that we require an editor to disambiguate
them, it's too dense. Add whitespace or decompose the expression until
readable.

On Thu, Feb 16, 2017 at 3:15 PM, Mike Hommey  wrote:
> On Fri, Feb 17, 2017 at 12:10:32AM +0100, Jean-Yves Avenard wrote:
>> reading this: (turn on courier)
>>
>>   return ((aCodecMask & VPXDecoder::VP8)
>>   && aMimeType.EqualsLiteral("video/webm; codecs=vp8"))
>>  || ((aCodecMask & VPXDecoder::VP9)
>>  && aMimeType.EqualsLiteral("video/webm; codecs=vp9"))
>>  || ((aCodecMask & VPXDecoder::VP9)
>>  && aMimeType.EqualsLiteral("video/vp9"));
>>
>> than:
>>   return ((aCodecMask & VPXDecoder::VP8) &&
>>   aMimeType.EqualsLiteral("video/webm; codecs=vp8")) ||
>>  ((aCodecMask & VPXDecoder::VP9) &&
>>   aMimeType.EqualsLiteral("video/webm; codecs=vp9")) ||
>>  ((aCodecMask & VPXDecoder::VP9) &&
>>   aMimeType.EqualsLiteral("video/vp9"));
>>
>> where does the || apply, where does the && ?
>> I must recompose the entire expression in my mind to understand what's going
>> on.
>> The previous one require no such mental process.
>
> ... assuming the indentation and use of parens is correct, which it
> might not be, so you still have to look closely.
>
> Which brings an interesting observation: the review tools are not
> helpful in that situation, while a local editor would allow you to know
> which closing parens corresponds to which opening one.
>
> Mike


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
Both are readable with judicious indentation on wrapping. If you must
recompose the entire expression in your mind, the code is too terse,
and we should consider cleaning it up and making it more readable.
&&/|| position will not make or break this.

On Thu, Feb 16, 2017 at 3:10 PM, Jean-Yves Avenard
 wrote:
> Hi
>
>
> On 16/02/17 23:56, Jeff Gilbert wrote:
>>
>> I don't visually like ||/&& at start of line, but I can't fault the
>> reasoning, so I'm weakly for it.
>> I don't think it's important enough to change existing code though.
>
>
> Disclaimer: I'm now biased about this.
>
> I've been writing C and C++ code for now most of my life. And during that
> time you take strong habits. Not always good habits, but there are there.
>
> Now having operators at the end of the line, including logical ones like &&
> and || is something I've always done, because it's just the way things were
> done.
> Never really thought much about it, you place the operator at the end of the
> line when you're splitting a long logical operations.
>
> but is it the proper things to do?
> Just like Gerald above, I had misread (or more accurately understood
> differently) what the rules were in that document, because there's various
> discrepancies in it.
>
> When you read code, especially during review, the review process is made
> much more easily when you can at a glance understand one particular logical
> calculation.
>
> reading this: (turn on courier)
>
>   return ((aCodecMask & VPXDecoder::VP8)
>   && aMimeType.EqualsLiteral("video/webm; codecs=vp8"))
>  || ((aCodecMask & VPXDecoder::VP9)
>  && aMimeType.EqualsLiteral("video/webm; codecs=vp9"))
>  || ((aCodecMask & VPXDecoder::VP9)
>  && aMimeType.EqualsLiteral("video/vp9"));
>
> than:
>   return ((aCodecMask & VPXDecoder::VP8) &&
>   aMimeType.EqualsLiteral("video/webm; codecs=vp8")) ||
>  ((aCodecMask & VPXDecoder::VP9) &&
>   aMimeType.EqualsLiteral("video/webm; codecs=vp9")) ||
>  ((aCodecMask & VPXDecoder::VP9) &&
>   aMimeType.EqualsLiteral("video/vp9"));
>
> where does the || apply, where does the && ?
> I must recompose the entire expression in my mind to understand what's going
> on.
> The previous one require no such mental process.
>
> If we are talking about new code, the question becomes, can this become the
> rule *and* can we add such rule to clang-format
> seeing that we're in the process of applying clang-format to the entire
> source tree (with exceptions), modifying existing code would become a moot
> issue.
>
> Cheers
> JY
>
>
>> On Thu, Feb 16, 2017 at 1:47 PM,   wrote:
>>>
>>> Question of the day:
>>> When breaking overlong expressions, should &&/|| go at the end or the
>>> beginning of the line?
>>>
>>> TL;DR: Coding style says 'end', I&others think we should change it to
>>> 'beginning' for better clarity, and consistency with other operators.
>>>
>>>
>>> Our coding style reads:
>>> "Break long conditions after && and || logical connectives. See below for
>>> the rule for other operators." [1]
>>> """
>>> Overlong expressions not joined by && and || should break so the operator
>>> starts on the second line and starts in the same column as the start of the
>>> expression in the first line. This applies to ?:, binary arithmetic
>>> operators including +, and member-of operators (in particular the . operator
>>> in JavaScript, see the Rationale).
>>>
>>> Rationale: operator at the front of the continuation line makes for
>>> faster visual scanning, because there is no need to read to end of line.
>>> Also there exists a context-sensitive keyword hazard in JavaScript; see bug
>>> 442099, comment 19, which can be avoided by putting . at the start of a
>>> continuation line in long member expression.
>>> """ [2]
>>>
>>>
>>> I initially focused on the rationale, so I thought *all* operators should
>>> go at the front of the line.
>>>
>>> But it seems I've been living a lie!
>>> &&/|| should apparently be at the end, while other operators (in some
>>> situations) should be at the beginning.
>>>
>>>
> >>> Now I personally think this just doesn't make sense.
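On JY's clang-format question: the formatter does have a knob for exactly this. A sketch of the relevant .clang-format fragment (option names are from the clang-format documentation; the base-style line is illustrative):

```yaml
BasedOnStyle: Mozilla
# 'None' keeps && and || at the end of the line (the current coding style);
# 'NonAssignment' moves binary operators, including && and ||, to the start
# of the continuation line, as proposed in this thread.
BreakBeforeBinaryOperators: NonAssignment
```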

Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
I would not assume that's necessarily going to happen without
contention. Consistency is not a goal in and of itself.

On Thu, Feb 16, 2017 at 3:04 PM,   wrote:
> There's an ongoing effort to use clang-format to automatically format All The 
> Codes [1], so I think we should try and make sure we agree (or at least 
> settle*) on a coding style when it's going to be enforced everywhere. ;-)
>
> * I'm presenting my own preferences here, but in the end I'm fine with 
> complying with whatever we can all agree to live with. Consistency is more
> important than personal preferences.
>
> g.
>
> On Friday, February 17, 2017 at 9:57:04 AM UTC+11, Jeff Gilbert wrote:
>> I don't visually like ||/&& at start of line, but I can't fault the
>> reasoning, so I'm weakly for it.
>> I don't think it's important enough to change existing code though.
>>
>> On Thu, Feb 16, 2017 at 1:47 PM,   wrote:
>> > Question of the day:
>> > When breaking overlong expressions, should &&/|| go at the end or the 
>> > beginning of the line?
>> >
>> > TL;DR: Coding style says 'end', I&others think we should change it to 
>> > 'beginning' for better clarity, and consistency with other operators.
>> >
>> >
>> > Our coding style reads:
>> > "Break long conditions after && and || logical connectives. See below for 
>> > the rule for other operators." [1]
>> > """
>> > Overlong expressions not joined by && and || should break so the operator 
>> > starts on the second line and starts in the same column as the start of 
>> > the expression in the first line. This applies to ?:, binary arithmetic 
>> > operators including +, and member-of operators (in particular the . 
>> > operator in JavaScript, see the Rationale).
>> >
>> > Rationale: operator at the front of the continuation line makes for faster 
>> > visual scanning, because there is no need to read to end of line. Also 
>> > there exists a context-sensitive keyword hazard in JavaScript; see bug 
>> > 442099, comment 19, which can be avoided by putting . at the start of a 
>> > continuation line in long member expression.
>> > """ [2]
>> >
>> >
>> > I initially focused on the rationale, so I thought *all* operators should 
>> > go at the front of the line.
>> >
>> > But it seems I've been living a lie!
>> > &&/|| should apparently be at the end, while other operators (in some 
>> > situations) should be at the beginning.
>> >
>> >
>> > Now I personally think this just doesn't make sense:
>> > - Why the distinction between &&/|| and other operators?
>> > - Why would the excellent rationale not apply to &&/||?
>> > - Pedantically, the style talks about 'expression *not* joined by &&/||, 
>> > but what about expression that *are* joined by &&/||? (Undefined Behavior!)
>> >
>> > Based on that, I believe &&/|| should be made consistent with *all* 
>> > operators, and go at the beginning of lines, aligned with the first 
>> > operand above.
>> >
>> > And therefore I would propose the following changes to the coding style:
>> > - Remove the lonely &&/|| sentence at [1].
>> > - Rephrase the first sentence at [2] to something like: "Overlong 
>> > expressions should break so that the operator starts on the following 
>> > line, in the same column as the first operand for that operator. This 
>> > applies to all binary operators, including member-of operators (in 
>> > particular the . operator in JavaScript, see the Rationale), and extends 
>> > to ?: where the 2nd and third operands should be on separate lines and 
>> > start in the same column as the first operand."
>> > - Keep the rationale at [2].
>> >
>> > Also, I think we should add something about where to break expressions 
>> > with operators of differing precedences, something like: "Overlong 
>> > expressions containing operators of differing precedences should first be 
>> > broken at the operator of lowest precedence. E.g.: 'a+b*c' should be split 
>> > at '+' before '*'"
>> >
>> >
>> > A bit more context:
>> > Looking at the history of the coding style page, a certain "Brendan" wrote 
>> > that section in August 2009.

Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
I don't visually like ||/&& at start of line, but I can't fault the
reasoning, so I'm weakly for it.
I don't think it's important enough to change existing code though.

On Thu, Feb 16, 2017 at 1:47 PM,   wrote:
> Question of the day:
> When breaking overlong expressions, should &&/|| go at the end or the 
> beginning of the line?
>
> TL;DR: Coding style says 'end', I&others think we should change it to 
> 'beginning' for better clarity, and consistency with other operators.
>
>
> Our coding style reads:
> "Break long conditions after && and || logical connectives. See below for the 
> rule for other operators." [1]
> """
> Overlong expressions not joined by && and || should break so the operator 
> starts on the second line and starts in the same column as the start of the 
> expression in the first line. This applies to ?:, binary arithmetic operators 
> including +, and member-of operators (in particular the . operator in 
> JavaScript, see the Rationale).
>
> Rationale: operator at the front of the continuation line makes for faster 
> visual scanning, because there is no need to read to end of line. Also there 
> exists a context-sensitive keyword hazard in JavaScript; see bug 442099, 
> comment 19, which can be avoided by putting . at the start of a continuation 
> line in long member expression.
> """ [2]
>
>
> I initially focused on the rationale, so I thought *all* operators should go 
> at the front of the line.
>
> But it seems I've been living a lie!
> &&/|| should apparently be at the end, while other operators (in some 
> situations) should be at the beginning.
>
>
> Now I personally think this just doesn't make sense:
> - Why the distinction between &&/|| and other operators?
> - Why would the excellent rationale not apply to &&/||?
> - Pedantically, the style talks about 'expression *not* joined by &&/||, but 
> what about expression that *are* joined by &&/||? (Undefined Behavior!)
>
> Based on that, I believe &&/|| should be made consistent with *all* 
> operators, and go at the beginning of lines, aligned with the first operand 
> above.
>
> And therefore I would propose the following changes to the coding style:
> - Remove the lonely &&/|| sentence at [1].
> - Rephrase the first sentence at [2] to something like: "Overlong expressions 
> should break so that the operator starts on the following line, in the same 
> column as the first operand for that operator. This applies to all binary 
> operators, including member-of operators (in particular the . operator in 
> JavaScript, see the Rationale), and extends to ?: where the 2nd and third 
> operands should be on separate lines and start in the same column as the 
> first operand."
> - Keep the rationale at [2].
>
> Also, I think we should add something about where to break expressions with 
> operators of differing precedences, something like: "Overlong expressions 
> containing operators of differing precedences should first be broken at the 
> operator of lowest precedence. E.g.: 'a+b*c' should be split at '+' before 
> '*'"
>
>
> A bit more context:
> Looking at the history of the coding style page, a certain "Brendan" wrote 
> that section in August 2009 [3], shortly after a discussion here [4] that 
> seemed to focus on the dot operator in Javascript. In that discussion, &&/|| 
> appear in examples at the end of lines and nobody talks about that (because 
> it was not the main subject, and/or everybody agreed with it?)
>
> Discuss!
>
>
> [1] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Control_Structures
> [2] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Operators
> [3] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style$compare?locale=en-US&to=7315&from=7314
> [4] 
> https://groups.google.com/d/msg/mozilla.dev.platform/Ji9lxlLCYME/zabUmQI9S-sJ


Re: Changing the representation of rectangles in platform code

2017-02-10 Thread Jeff Gilbert
Reducing overflow risk and simplifying intersection both seem worth
it, but godspeed whoever works on this!

On Fri, Feb 10, 2017 at 12:45 PM, Milan Sreckovic
 wrote:
> First step needs to happen completely before the second step does, so I
> guess the danger is that we start and give up before we get to step 2.  I
> don't think that will happen, but it is something we should always think
> about.
>
> Third step - sure, I can see this not getting completed - examining places
> that do xmax = xmin + width instead of just getting the max X, for example.
>
>
> On 09-Feb-17 20:56, Nicholas Nethercote wrote:
>>
>> My concern is that  this will get partially done but never completed, and
>> we'll end up in a situation where we have two styles in place. We have a
>> bad track record with that sort of thing.
>>
>> Nick
>>
>> On Thu, Feb 9, 2017 at 1:45 PM, Jeff Muizelaar  wrote:
>>
>>> It's not very easy to reason about overflow issues with our current
>>> representation. This means that we currently just pretend that they
>>> don't happen. The idea for changing the representation came up in
>>> response to a security bug where we didn't really have a better
>>> solution.
>>>
>>> Changing to x1, x2, y1, y2 will allow us to match the pixman rectangle
>>> representation that we use for regions. The region code shows quite a
>>> bit profiles so even a small improvement in this area is nice to have.
>>>
>>> -Jeff
>>>
>>> On Wed, Feb 8, 2017 at 9:19 PM, David Major  wrote:

>>>> Is there a specific problem that's being solved by this proposal? It
>>>> would be helpful to make this a bit more concrete, like "these
>>>> benchmarks go x% faster", or "here's a list of overflow bugs that
>>>> will just vanish", or "here's some upcoming work that this would
>>>> facilitate".
>>>>
>>>> On Thu, Feb 9, 2017 at 1:56 PM, Botond Ballo  wrote:
>
> Hi everyone!
>
> I would like to propose changing the internal representation of
> rectangles in platform code.
>
> We currently represent a rectangle by storing the coordinates of its
> top-left corner, its width, and its height. I'll refer to this
> representation as "x/y/w/h".
>
> I would like to propose storing instead the coordinates of the
> top-left corner, and the coordinates of the bottom-right corner. I'll
> refer to this representation as "x1/y1/x2/y2".
>
> The x1/y1/x2/y2 representation has several advantages over x/y/w/h:
>
>- Several operations are more efficient with x1/y1/x2/y2, including
> intersection,
>  union, and point-in-rect.
>- The representation is more symmetric, since it stores two
> quantities
> of the
>  same kind (two points) rather than a point and a dimension
> (width/height).
>- The representation is less susceptible to overflow. With x/y/w/h,
> computation
>  of x2/y2 can overflow for a large range of values of x/y and w/h.
> However,
>  with x1/y1/x2/y2, computation of w/h cannot overflow if the
> coordinates are
>  signed and the resulting w/h is unsigned.
>
> A known disadvantage of x1/y1/x2/y2 is that translating the rectangle
> requires translating both points, whereas translating x/y/w/h only
> requires translating one point. I think this disadvantage is minor in
> comparison to the above advantages.
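The intersection and overflow points above can be sketched in a few lines (a toy illustration only, not the real mozilla::gfx::BaseRect code; the struct and method names are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Minimal sketch of the proposed x1/y1/x2/y2 representation.
// With two points, intersection is just min/max on each axis and never
// computes x + width, so it avoids the overflow hazard of x/y/w/h.
struct RectXY {
  int32_t x1, y1, x2, y2;  // top-left and bottom-right corners

  bool IsEmpty() const { return x1 >= x2 || y1 >= y2; }

  // Intersection: the larger of the mins, the smaller of the maxes.
  RectXY Intersect(const RectXY& aOther) const {
    return RectXY{std::max(x1, aOther.x1), std::max(y1, aOther.y1),
                  std::min(x2, aOther.x2), std::min(y2, aOther.y2)};
  }

  bool Contains(int32_t aPx, int32_t aPy) const {
    return aPx >= x1 && aPx < x2 && aPy >= y1 && aPy < y2;
  }
};
```

Note how both Intersect and Contains consist purely of comparisons on stored coordinates; no derived quantity ever has to be computed first.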
>
> The proposed change would affect the class mozilla::gfx::BaseRect, and
> classes that derive from it (such as CSSRect, LayoutRect, etc., and,
> notably, nsRect and nsIntRect), but NOT other rectangle classes like
> DOMRect.
>
> I would like to propose making the transition as follows:
>
>- Replace direct accesses to the 'width' and 'height' fields
>>>
>>> throughout
>
>  the codebase with calls to getter and setter methods. (There
> aren't
>  that many - on the order of a few dozen, last I checked.)
>
>- Make the representation change, which is non-breaking now that
>  the direct accesses to 'width' and 'height' have been removed.
>
>- Examine remaining calls to the getters and setters for width and
>  height and see if any can be better expressed using operations
>  on the points instead.
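The migration steps above can be sketched like this (hypothetical accessor names, not the actual BaseRect interface): once callers go through Width()/SetWidth(), the storage can flip from w/h to x2/y2 without breaking any call site.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of steps 1 and 2: width/height become derived quantities
// behind accessors, so the representation change is non-breaking.
struct MigratedRect {
  int32_t x1, y1, x2, y2;

  // Computing the difference in unsigned arithmetic is well-defined and
  // yields the correct width whenever x1 <= x2, even when x2 - x1 would
  // not fit in a signed 32-bit value.
  uint32_t Width() const {
    return static_cast<uint32_t>(x2) - static_cast<uint32_t>(x1);
  }
  uint32_t Height() const {
    return static_cast<uint32_t>(y2) - static_cast<uint32_t>(y1);
  }

  void SetWidth(uint32_t aWidth) { x2 = x1 + static_cast<int32_t>(aWidth); }
  void SetHeight(uint32_t aHeight) { y2 = y1 + static_cast<int32_t>(aHeight); }
};
```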
>
> The Graphics team, which owns this code, is supportive of this change.
> However, since this is a fundamental utility type that's used by a
> variety of platform code, I would like to ask the wider platform
> development community for feedback before proceeding. Please let me
> know if you have any!
>
> Thanks,
> Botond
>
> [1]
> http://searchfox.org/mozilla-central/rev/672c83ed65da286b68be1d02799c35fdd14d0134/gfx/2d/BaseRect.h#46
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Re: Intent to deprecate: Building mozilla-central without Skia enabled

2016-08-23 Thread Jeff Gilbert
Agreed.

On Mon, Aug 22, 2016 at 11:08 AM, George Wright  wrote:
> I'm looking to remove the option to build mozilla-central without Skia.
>
> Currently we default to Skia being disabled, and enable it using
> --enable-skia. This is problematic because all of our officially supported
> build configurations enable Skia, and as such the Skia-is-disabled build has
> been buggy for quite some time. This has manifested most recently in bug
> 1296524.
>
> In the past year, Skia has moved from being an optional part of our graphics
> code to being a core, indispensable module. We are actively pursuing making
> it the default backend in almost all canvas and content configurations; it
> is already the default software canvas option on all platforms and will soon
> be for all software-backed content as well.
>
> To this end, I'd like to remove support for building mozilla-central without
> Skia as I think it should now be considered an integral part of our code.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-06-01 Thread Jeff Gilbert
Awesome, this sounds like what I was after. (Though mapping to actual
components isn't strictly necessary; if that part is a pain point, I
would prefer to have a tool without it than to have no tool.)

On Wed, Jun 1, 2016 at 12:23 PM, Ted Mielczarek  wrote:
> On Tue, May 31, 2016, at 09:26 PM, Jeff Gilbert wrote:
>> On Tue, May 31, 2016 at 4:39 PM, Nicholas Nethercote
>>  wrote:
>> > On Wed, Jun 1, 2016 at 1:05 AM, Benjamin Smedberg  
>> > wrote:
>> >> You shouldn't need to annotate the file/line separately, because that is
>> >> (or at least should be!) the top of the stack.
>> >
>> > Yes. Don't get hung up on the lack of annotations. It isn't much of a
>> > problem; you can click through easily enough. I have filed bug 1277104
>> > to fix the handful of instances that are showing up in practice, but
>> > it'll only be a minor improvement.
>>
>> Perhaps this isn't meant for me then? I looked at the query from the
>> first post, but it's just noise to me. If it included the file that it
>> crashed from, it would suddenly be very useful, since it'd then be
>> trivial to see if there's something relevant to me.
>>
>> As it stands now, the query alone doesn't seem useful to me. If it's
>> meant to be useful to developers who write MOZ_CRASHes, this is a
>> problem. If not, please ignore!
>>
>> I would be extremely interested in MOZ_CRASHes and friends
>> automatically getting bugs filed and needinfo'd. An index of
>> crashes-by-file would get half-way there for me.
>
> I believe at some point in the past we talked about trying to do a "top
> crashes by bug component" view, but maintaining that mapping was hard.
> It turns out that we're storing this data in moz.build files  nowadays
> (for example[1]), and we even have a web service on hg.mozilla.org to
> expose it for any given revision[2]. Unfortunately that web service is
> currently broken[3], but gps tells me he's just been delaying fixing it
> because there weren't any consumers complaining.
>
> When the source file from the last frame of the stack used to generate
> the signature points to hg.mozilla.org we could query that web service
> to get a bug component for the file in question and put that in the
> crash report, and expose it to queries. That would make it easy to get
> lists of crashes by component, which I think would do what people here
> are asking for. I filed bug 1277337 to track this idea.
>
> -Ted
>
> 1.
> https://dxr.mozilla.org/mozilla-central/rev/4d63dde701b47b8661ab7990f197b6b60e543839/dom/media/moz.build#7
> 2.
> http://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/mozbuildinfo.html
> 3. https://bugzilla.mozilla.org/show_bug.cgi?id=1263973
> 4. https://bugzilla.mozilla.org/show_bug.cgi?id=1277337
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-06-01 Thread Jeff Gilbert
It would be useful to have a dashboard that collates this information better.

PS: Sarcasm is unhelpful.

On Tue, May 31, 2016 at 7:14 PM, Nicholas Nethercote
 wrote:
> On Wed, Jun 1, 2016 at 11:26 AM, Jeff Gilbert  wrote:
>>
>> Perhaps this isn't meant for me then? I looked at the query from the
>> first post, but it's just noise to me. If it included the file that it
>> crashed from, it would suddenly be very useful, since it'd then be
>> trivial to see if there's something relevant to me.
>
> Let's look at the top #3:
>
>
> 1  MOZ_CRASH(Shutdown too long, probably frozen, causing a crash.)
> 129715  9.92 %
>
> If you use your favourite source code search tool to look for
> "Shutdown too long", you'll find that this crash is occurring in
> toolkit/components/terminator/nsTerminator.cpp. For example, here's a
> DXR link:
>
> https://dxr.mozilla.org/mozilla-central/search?q=%22Shutdown+too+long%22&redirect=false
>
> The line in question looks like this:
>
>  MOZ_CRASH("Shutdown too long, probably frozen, causing a crash.");
>
>
> 2  MOZ_CRASH() 25987   1.99 %
>
> This one matches all calls to MOZ_CRASH() that don't provide a string
> parameter. Digging into these ones is slightly harder, requiring a new
> search for bugs that have "moz crash reason" set to "MOZ_CRASH()":
>
> https://crash-stats.mozilla.com/search/?product=Firefox&moz_crash_reason=%3DMOZ_CRASH%28%29&_facets=signature&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform&_columns=moz_crash_reason#facet-signature
>
>
> 3  MOZ_CRASH(GFX: Unable to get a working D3D9 Compositor) 21040.16 %
>
> Searching for "working D3D9 Compositor" identifies this one as coming
> from gfx/layers/d3d9/CompositorD3D9.cpp.
>
>
> And so on. Searching for strings in code is a useful technique in many
> situations, I recommend it!
>
>
> BTW, thank you to all those who have already looked through the list
> and mentioned existing bugs and/or filed new bugs.
>
> Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-05-31 Thread Jeff Gilbert
On Tue, May 31, 2016 at 4:39 PM, Nicholas Nethercote
 wrote:
> On Wed, Jun 1, 2016 at 1:05 AM, Benjamin Smedberg  
> wrote:
>> You shouldn't need to annotate the file/line separately, because that is
>> (or at least should be!) the top of the stack.
>
> Yes. Don't get hung up on the lack of annotations. It isn't much of a
> problem; you can click through easily enough. I have filed bug 1277104
> to fix the handful of instances that are showing up in practice, but
> it'll only be a minor improvement.

Perhaps this isn't meant for me then? I looked at the query from the
first post, but it's just noise to me. If it included the file that it
crashed from, it would suddenly be very useful, since it'd then be
trivial to see if there's something relevant to me.

As it stands now, the query alone doesn't seem useful to me. If it's
meant to be useful to developers who write MOZ_CRASHes, this is a
problem. If not, please ignore!

I would be extremely interested in MOZ_CRASHes and friends
automatically getting bugs filed and needinfo'd. An index of
crashes-by-file would get half-way there for me.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-05-31 Thread Jeff Gilbert
On Tue, May 31, 2016 at 6:18 AM, Gabriele Svelto  wrote:
> On 31/05/2016 13:26, Gijs Kruitbosch wrote:
>> We could do a find/replace of no-arg calls to a new macro that uses
>> MOZ_CRASH with a boilerplate message, and make the argument non-optional
>> for new uses of MOZ_CRASH? That would avoid the problem for new
>> MOZ_CRASH() additions, which seems like it would be wise so the problem
>> doesn't get worse? Or is it not worth even that?
>
> What about adding file/line number information? This way one could
> always tell where it's coming from even if it doesn't have a descriptive
> string.

Agreed! These queries are much more useful if they have file names.
Line numbers are a plus, but I agree that since these drift, they are
not useful for collating. File names will generally not drift, and
would make these queries much easier to grep for problems originating
from code we're responsible for.
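One way the file-name idea from this thread could look, as a rough sketch (hypothetical macro and helper, not the real MOZ_CRASH implementation): bake __FILE__/__LINE__ into the recorded reason so even no-argument crashes can be collated by file.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Format "file:line: reason" the way a crash annotation might.
// In the real macro this string would be recorded before aborting;
// here it is returned so the formatting can be exercised directly.
inline std::string FormatCrashReason(const char* aFile, int aLine,
                                     const char* aReason) {
  char buf[512];
  std::snprintf(buf, sizeof(buf), "%s:%d: %s", aFile, aLine, aReason);
  return std::string(buf);
}

// Hypothetical wrapper capturing the call site automatically.
#define CRASH_REASON(reason) FormatCrashReason(__FILE__, __LINE__, reason)
```

Line numbers drift between revisions, as noted above, but the file name component stays stable enough to group crashes by owning code area.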
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Generating Visual Studio project files by default

2016-05-24 Thread Jeff Gilbert
What's the build-time impact of this?

On Tue, May 24, 2016 at 4:00 PM, Gregory Szorc  wrote:
> Coming soon to your local builds, Visual Studio project files will be
> generated automatically when building on Windows. We want to
> encourage more people to use them, since fully-featured IDEs can be
> productivity wins.
>
> The Visual Studio projects now automatically target the Visual Studio
> version being used (1 week ago they would always generate VS2013 project
> files). Another new change is that unless the build configuration changes,
> the Visual Studio files won't be updated by the build system unless they
> need to be changed. That means if you do a pull+build when Visual Studio is open,
> you shouldn't get tons of warnings that you need to reload files to pick up
> changes.
>
> The Visual Studio integration isn't perfect. We'd like to encourage more
> development on Windows because that's where most Firefox users are. So if
> you'd like improvements to the Visual Studio project files, please file
> Core :: Build Config bugs.
>
> This change was tracked in bug 1275297. Bug 1275419 tracks a follow-up to
> allow disabling their generation.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Out parameters, References vs. Pointers (was: Proposal: use nsresult& outparams in constructors to represent failure)

2016-04-21 Thread Jeff Gilbert
Pointers are preferable for outparams because they make it clearer
what's going on at the callsite. (at least indicating that something
non-trivial is happening)
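A minimal illustration of that call-site difference (toy functions, not Gecko code): with a pointer outparam, the `&` at the call site signals that the argument may be written to, while a reference outparam looks identical to pass-by-value.

```cpp
#include <cassert>
#include <string>

// Pointer outparam: the caller must write '&len', flagging the mutation.
bool ParseLengthPtr(const std::string& aInput, size_t* aOutLen) {
  if (aInput.empty()) return false;
  *aOutLen = aInput.size();
  return true;
}

// Reference outparam: the call site gives no hint that 'len' changes.
bool ParseLengthRef(const std::string& aInput, size_t& aOutLen) {
  if (aInput.empty()) return false;
  aOutLen = aInput.size();
  return true;
}
```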

On Wed, Apr 20, 2016 at 8:07 PM, Kan-Ru Chen (陳侃如)  wrote:
> Nicholas Nethercote  writes:
>
>> Hi,
>>
>> C++ constructors can't be made fallible without using exceptions. As a 
>> result,
>> for many classes we have a constructor and a fallible Init() method which 
>> must
>> be called immediately after construction.
>>
>> Except... there is one way to make constructors fallible: use an |nsresult&
>> aRv| outparam to communicate possible failure. I propose that we start doing
>> this.
>
> Current coding style guidelines suggest that out parameters should use
> pointers instead of references. The suggested |nsresult&| will be
> consistent with |ErrorResult&| usage from DOM but against many other out
> parameters, especially XPCOM code.
>
> Should we special case that nsresult and ErrorResult as output
> parameters should always use references, or make it also the default
> style for out parameters?
>
> I think this topic has been discussed before but didn't reach a
> consensus. Based on the recent effort to make the code style somewhat
> consistent, should we expand on this in the wiki?
>
>Kanru
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-21 Thread Jeff Gilbert
FWIW, I use static Create functions for fallible heap-only objects,
and StackFallible::ctor(bool* const out_success/error) for the rarer
fallible stack-createable objects.

It sounds like this lines up with existing proposals here.

Example fallible heap-only:
https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/gfx/gl/GLScreenBuffer.cpp#46
https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/gfx/gl/SharedSurfaceEGL.cpp#19

Example fallible stack-createable:
https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/dom/canvas/WebGLContextDraw.cpp#254
(Definition is at the top of the file)

Generally my preference for static Create functions is that they don't
allocate the actual class until they are infallible, and thus they are
never in partially-valid states. The ctor is almost always trivial, so
as to keep all the logic centralized in the Create function.

Static Create funcs with thin ctors also allow you to use const to a
greater extent, which can be really nice. (many getters can become
direct access to public const members)
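A condensed sketch of the pattern described above, substituting std::unique_ptr for Gecko's already_AddRefed/RefPtr (class and member names are invented for illustration): all fallible work happens before the object exists, the ctor is trivial, and a validated member can be public const.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <string>

class Decoder {
public:
  // Factory: returns null on failure; never yields a half-valid object.
  static std::unique_ptr<Decoder> Create(const std::string& aHeader) {
    if (aHeader.size() < 4) return nullptr;        // fallible validation
    const uint32_t version = aHeader[0] - '0';
    if (version == 0) return nullptr;              // more fallible logic
    return std::unique_ptr<Decoder>(new Decoder(version));
  }

  const uint32_t mVersion;  // public const member instead of a getter

private:
  explicit Decoder(uint32_t aVersion) : mVersion(aVersion) {}  // thin ctor
};
```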


Shoehorning is not my favorite. I'd much rather get the minor
refactoring done to modernize the cruft, rather than limp it along.

On Thu, Apr 21, 2016 at 8:36 AM, Nick Fitzgerald
 wrote:
> On Thu, Apr 21, 2016 at 3:24 AM, Nicholas Nethercote wrote:
>
>> On Thu, Apr 21, 2016 at 7:38 PM, Nicolas Silva 
>> wrote:
>> > Fallible construction (even with a way to report failure) is annoying if
>> > only because the object's destructor has to account for the possible
>> > invalid states. I much prefer having a static creation method that will
>> > only instantiate the object in case of success, and mark the constructor
>> > protected. Something like:
>> >
>> > static
>> > already_AddRefed<Foo> Foo::Create(SomeParams...) {
>> > // logic that may fail...
>> > if (failed) {
>> >  return nullptr;
>> > }
>> > return MakeAndAddRef<Foo>(stuffThatDidNotFail);
>> > }
>>
>> So instead of doing the infallible part in the constructor and the
>> fallible part in the Init() function, you're suggesting doing the
>> fallible part first and only calling the constructor once it succeeds?
>> Interesting. I can see it would work nicely for some heap-allocated
>> objects. But I see two possible problems:
>>
>> - It doesn't appear to work with stack-allocated objects?
>>
>
> I think `mozilla::Maybe` works well, as mentioned elsewhere in the
> thread. Here is an example of a non-boxed class whose ctor is infallible
> and has a factory method returning a `Maybe` where
> `None` indicates failure:
> https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/js/public/UbiNodeDominatorTree.h#207-274
>
> In general I agree with Nicolas: use patterns that make half-constructed
> things impossible and avoid the whole mess.
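A small sketch of the Maybe-returning factory for stack objects, using std::optional (C++17) in place of mozilla::Maybe (the class and names here are invented for illustration): the ctor is infallible, and an empty optional represents failure, so a half-constructed object is never observable.

```cpp
#include <cassert>
#include <optional>  // standing in for mozilla::Maybe

class Ratio {
public:
  static std::optional<Ratio> Make(int aNum, int aDenom) {
    if (aDenom == 0) return std::nullopt;  // the only fallible step
    return Ratio(aNum, aDenom);            // infallible construction
  }
  double Value() const { return double(mNum) / mDenom; }

private:
  Ratio(int aNum, int aDenom) : mNum(aNum), mDenom(aDenom) {}
  int mNum, mDenom;
};
```

Because the only constructor is private and infallible, every `Ratio` that exists is valid, and the destructor never has to handle partial states.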
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style for C++ enums

2016-04-11 Thread Jeff Gilbert
On Mon, Apr 11, 2016 at 4:00 PM, Bobby Holley  wrote:
> On Mon, Apr 11, 2016 at 2:12 PM, Jeff Gilbert  wrote:
>> I propose that if you're in a part of the codebase where you can't
>> tell if it's an enum or a function pointer (regardless of the fact
>> that the compiler can't confuse these), you should do more looking
>> around before working on it, much less reviewing changes to the code.
>> If you can't tell if it's an enum, you're not going to be able to
>> effectively make changes, so buckle up and do some looking around.
>
>
> I strongly disagree with this statement and the sentiment behind it. The
> ability for certain people to be able to quickly review changes in code that
> they don't have paged into working memory is vital to the survival of Gecko.

Well I disagree in the long term. If we are going to survive, we
*must* minimize such requirements. Skimping on code review should be
low on our list of options, given its propensity to create bugs and
accrue technical debt in the name of moving fast in the short term.

It's a tough job. I definitely understand. But good review is a
necessity. We have a number of problems that are "harsh realities",
but I hope we're trying to fix what we can, not optimizing assuming
failure.

> Mozilla is very bad at distributing review loads. There are literally a
> handful of people holding the ship together (and smaug is one of them). It
> is not a fun job, and the bus-factor is frighteningly low. This means that
> we should absolutely give more weight to the preferences of top reviewers
> because:
> (a) We owe it to them, and
> (b) We want to make it easier for other people to join their ranks.

Why do we have to be bad at distributing review loads? Is it a tooling
problem? An on-boarding problem? Why can't we reduce our need and/or
spread the load?

It's not pleasant to try to discuss these things only to be told that
I don't review enough code to matter. A bunch of us review code, and
yes, some more than others. I don't know where to find actual data,
but there is a panel helping with MozReview feedback whose members
were selected from 20 people chosen by virtue of being "top reviewers
by volume", a group which to which smaug, I, and at least 18 others
belong.

There are people (including smaug) who review more code than me. Their
opinions *do* have more weight than mine. However, I hope my
arguments are still heard and granted what weight they are entitled
to, particularly since many people I talk with in person are strongly
opposed to changes here and elsewhere, but reluctant to join these
often-frustrating discussions.

It's always better when we have discussions based on data rather than
opinions, but when we're down to opinions, it's really hard to judge
support via email. There's no good "+1" option without spamming, and
many people just don't want to deal with the stress that merely
following these discussions can incur.

>> To go completely off the rails, I no longer even believe that all of
>> Gecko should have a single style.
>
>
> Yes, that is off the rails in this context.
>
>>
>> I think the whole attempt is
>> increasingly a distraction vs the alternative, and I say this as
>> someone who writes and reviews under at least three differently-styled
>> code areas. The JS engine will, as ever, remain distinct
>
>
> Jason agreed to converge SM style with Gecko.

The SM engineers I talked to are dismissive of this. I will solicit responses.

>> , as will
>> third-party code, and also all non-actively maintained files. (most of
>> our codebase)
>>
>> Yes, a formatter could magic-wand us to most of the superficial
>> aspects of a single code style, but we'd then need to backfill
>> readability into some of the trickier bits. Sometimes we need to step
>> outside the prevailing style to keep something reasonably readable.
>>
>> But we /already have/ a process for keeping things readable: Code reviews.
>> And if that fails: Module ownership.
>>
>> I'm not sure why we hold centralization in such high regard when it
>> comes to code style. And, if we are continuing to centralize this,
>> we're going to need to have an actual process for making decisions
>> here.
>>
>> I suspect the goal is to have an oracle we can ask how we should
>> format code in order to be readable. However, we can't even agree on
>> it, so the right answer often is indeterminate. However, people within
>> their modules have a better idea of what works in each case.
>>
>> I do think we should have some high-level style guidance, but I think
>> we're increasingly micromanaging style.

Re: Coding style for C++ enums

2016-04-11 Thread Jeff Gilbert
I'm not sure how this is a compromise. We were already supposed to use
enum classes with new code.

Every additional glyph incurs mental load. Code should instead be
written so these things are not necessary to explicitly state. *That*
is the code we should be trying to write. In well-written code,
superfluous warts detract from readability.

At this point, it feels a lot like I could propose full hungarian
notation to this list piecemeal, and there would be ongoing
'compromises' to add it one step at a time. After all, who wants to
trace back to the variable declaration when the variable name can tell
us what type it is?


I propose that if you're in a part of the codebase where you can't
tell if it's an enum or a function pointer (regardless of the fact
that the compiler can't confuse these), you should do more looking
around before working on it, much less reviewing changes to the code.
If you can't tell if it's an enum, you're not going to be able to
effectively make changes, so buckle up and do some looking around.


To go completely off the rails, I no longer even believe that all of
Gecko should have a single style. I think the whole attempt is
increasingly a distraction vs the alternative, and I say this as
someone who writes and reviews under at least three differently-styled
code areas. The JS engine will, as ever, remain distinct, as will
third-party code, and also all non-actively maintained files. (most of
our codebase)

Yes, a formatter could magic-wand us to most of the superficial
aspects of a single code style, but we'd then need to backfill
readability into some of the trickier bits. Sometimes we need to step
outside the prevailing style to keep something reasonably readable.

But we /already have/ a process for keeping things readable: Code reviews.
And if that fails: Module ownership.

I'm not sure why we hold centralization in such high regard when it
comes to code style. And, if we are continuing to centralize this,
we're going to need to have an actual process for making decisions
here.

I suspect the goal is to have an oracle we can ask how we should
format code in order to be readable. However, we can't even agree on
it, so the right answer often is indeterminate. However, people within
their modules have a better idea of what works in each case.

I do think we should have some high-level style guidance, but I think
we're increasingly micromanaging style. Let code review take care of
the specifics here. If it's not readable without prefixes, definitely
consider adding prefixes. Otherwise don't require it.


FWIW, I do have a preference for the k prefix over the e prefix (if we
must have a required prefix at all), since enum values are in the same
'constants' category. I can be persuaded of the usefulness of 'k' for
enums; 'e', not so much.


On Mon, Apr 11, 2016 at 8:00 AM, Kartikaya Gupta  wrote:
> Based on all the feedback so far I think the best compromise is to use
> enum classes with the "e" prefix on the values. As was mentioned, the
> "e" prefix is useful to distinguish between enum values and function
> pointer passing at call sites, but is a small enough prefix that it
> shouldn't be too much of a burden.
>
> I realize not everybody is going to be happy with this but I don't
> want this discussion to drag on to the point where it's a net loss of
> productivity no matter what we end up doing. I'll update the style
> guide.
>
> Thanks all!
>
> kats
>
>
> On Sun, Apr 10, 2016 at 11:43 AM, smaug  wrote:
>> On 04/10/2016 05:50 PM, Aryeh Gregor wrote:
>>>
>>> On Fri, Apr 8, 2016 at 8:35 PM, smaug  wrote:

 enum classes are great, but I'd still use prefix for the values.
 Otherwise the values look like types - nested classes or such.
 (Keeping my reviewer's hat on, so thinking about readability here.)
>>>
>>>
>>> In some cases I think it's extremely clear, like this:
>>>
>>>enum class Change { minus, plus };
>>>
>>> whose sole use is as a function parameter (to avoid a boolean
>>> parameter), like so:
>>>
>>>ChangeIndentation(*curNode->AsElement(), Change::plus);
>>>
>>> In cases where it might be mistaken for a class, it might make more
>>> sense to prefix the enum class name with "E".  I don't think this
>>> should be required as a general policy, though, because for some enums
>>> it might be clear enough without it.
>>>
>>
>> Indeed in your case it is totally unclear whether one is passing a function
>> pointer or enum value as param.
>> If the value was prefixed with e, it would be clear.
>>
>> Consistency helps with reviewing (and in general code readability) a lot, so
>> the less we have optional coding style rules, or
>> context dependent rules, the better.
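The call-site ambiguity discussed in the quote above can be sketched like this (toy code, not from the tree): an unscoped, unprefixed name could plausibly be an enum value or a function, whereas a scoped enum value spells out its category.

```cpp
#include <cassert>

// Scoped enum with the e-prefix style under discussion.
enum class Change { ePlus, eMinus };

int Apply(Change aChange, int aValue) {
  return aChange == Change::ePlus ? aValue + 1 : aValue - 1;
}

// A free function that, as a bare unscoped name, could be confused
// with an enum value at a call site.
int plus(int aValue) { return aValue + 1; }

int ApplyFn(int (*aOp)(int), int aValue) { return aOp(aValue); }
```

At the call sites below, `Change::ePlus` is unambiguously an enum value and `plus` is unambiguously a function pointer, which is exactly the readability property being argued for.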
>>
>>
>>
>>
>>
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Re: Coding style for C++ enums

2016-04-08 Thread Jeff Gilbert
It's noisy in code you *do* understand, which is the bulk of the code
we should be dealing with the majority of the time.

I do not parse this initially as a type because that generally doesn't
make sense given context.
Generally the names involved are also unlikely to be types, based on name alone.

It would also add to our already-pungent code smell.

On Fri, Apr 8, 2016 at 3:03 PM, Mats Palmgren  wrote:
> On 2016-04-08 23:03, Jeff Gilbert wrote:
>>
>> Strong preference against eFoo, here. :)
>
>
> Strong preference *for* eFoo, here. :)
>
> If I see Bar::Foo anywhere in code I'm not familiar with, my brain
> is likely to first parse that as a type before realizing that, hmm
> that doesn't make sense in an expression context, and then I will
> likely have to lookup what that silly Bar thing is just to be sure.
>
> eFoo is unambiguous and utterly clear.
>
> /Mats
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Build System Project - Update from the last 2 weeks

2016-04-08 Thread Jeff Gilbert
I thought Linux did LTO but not PGO?

On Tue, Apr 5, 2016 at 3:53 PM, Mike Hommey  wrote:
> On Tue, Apr 05, 2016 at 09:02:09PM +0100, David Burns wrote:
>> Below is a highlight of all work the build peers have done in the last 2
>> weeks as part of their work to modernise the build infrastructure.
>>
>> Since the last report[1] a large number of improvements have landed in
>> Mozilla Central.
>>
>> The build system now lazily installs test files. Before, the build copied
>> tens of thousands of test and support files. This could take dozens of
>> seconds on Windows or machines with slow I/O. Now, the build system defers
>> installing test files until they are needed there (e.g. when running tests
>> or creating test packages). Furthermore, only the test files relevant to
>> the action performed are installed. Mach commands running tests should be
>> significantly faster, as they no longer examine the state of tens of
>> thousands of files on every invocation.
>>
>> After upgrading build machines to use VS2015, we have seen a decrease in
>> build times[2] for PGO on Windows by around 100 minutes. This brings PGO
>> times on Windows in line with that of PGO(Strictly speaking this is LTO)
>> times on Linux.
>
> Just a nit: strictly speaking Windows builds do PGO+LTO, Linux builds do
> PGO, but not LTO.
>
> Mike
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style for C++ enums

2016-04-08 Thread Jeff Gilbert
Strong preference against eFoo, here. :)

Just use enum classes.

On Fri, Apr 8, 2016 at 10:35 AM, smaug  wrote:
> On 04/08/2016 07:38 PM, Nick Fitzgerald wrote:
>>
>> On Fri, Apr 8, 2016 at 9:29 AM, Birunthan Mohanathas <
>> birunt...@mohanathas.com> wrote:
>>
>>> On 8 April 2016 at 18:10, Kartikaya Gupta  wrote:

 Others?
>>>
>>>
>>> enum class OptionD {
>>>SentenceCaseValues,
>>>ThisWontBeConfusedWithOtherThings
>>>// ... because you need to use
>>> OptionD::ThisWontBeConfusedWithOtherThings
>>> };
>>>
>>
>> Strong +1 for enum classes. That way we don't need any more nasty
>> prefixing and can instead rely on the compiler and type system.
>>
>
> enum classes are great, but I'd still use prefix for the values. Otherwise
> the values look like types - nested classes or such.
> (Keeping my reviewer's hat on, so thinking about readability here.)
>
>
>
> -Olli
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposing preferring Clang over GCC for developer builds

2016-03-02 Thread Jeff Gilbert
On Wed, Mar 2, 2016 at 3:45 PM, Mike Hommey  wrote:
> More importantly, changing the official toolchain has implications on
> performance.

Sorry, I meant for general automation. Our final spins (especially
LTO/PGO builds) should remain whatever gives us maximum perf. (not
making any claims myself here!)

Our PGO/LTO builds can take 10x+ what our normal integration builds
take if it nets us a few percentage points of runtime perf.

I suppose it becomes a question of divergence between fast-building
builds and 'final' PGO/fully-optimized builds. We already have this to
some extent with PGO vs non-PGO builds.


Re: Proposing preferring Clang over GCC for developer builds

2016-03-02 Thread Jeff Gilbert
For standard development builds, --enable-debug build speed is (to me)
the primary motivator since we've guaranteed that they're equally
correct. (within reason) I'll gladly run some extra tests to gather
data about this.

FWIW, with a 26% speedup, it would definitely seems like it'd be worth
investigating also for automation. Is anyone/anybug driving that?

On Wed, Mar 2, 2016 at 2:50 PM, Gregory Szorc  wrote:
> Over in bug 1253064 I'm proposing switching developer builds to prefer
> Clang over GCC because the limited numbers we have say that Clang can build
> mozilla-central several minutes faster than GCC (13 minutes vs 17.5 on my
> I7-6700K). I'm not proposing switching what we use to produce builds in
> automation. I'm not proposing dropping GCC support. This is all about
> choosing faster defaults so people don't spend as much time waiting for
> builds to complete and become more productive as a result.
>
> Please comment on the bug if you have something to add.


Re: A list of easy-ish gfxContext-to-DrawTarget conversions

2016-01-12 Thread Jeff Gilbert
For a similar situation, we used an etherpad to allow painless
acquisition of pending todos. Just move the instances you want
'assigned' to you into a  section in the etherpad. It's not
perfect, but it's an alternative to responding to the email, and makes
it easy to see the state of things.

On Tue, Jan 12, 2016 at 2:54 PM, Nicholas Nethercote
 wrote:
> Greetings,
>
> Over the past few weeks I have spent some time doing gfxContext-to-DrawTarget
> conversions. (Also nsRendering-to-DrawTarget conversions, which are basically
> the same because nsRenderingContext is a wafer-thin wrapper around 
> gfxContext.)
>
> The number of occurrences of gfxContext and nsRenderingContext is now about
> 20% lower than when I started (it went from 1813 to 1428), but I now need to
> work on other things. Below is a list of cases that I didn't get to, but which
> look like they might be easy-ish for people with some Thebes/Moz2D
> experience.
>
> If you do work on any of these, it might be worth replying to this message to
> indicate this, to reduce the chance of two people duplicating work.
>
> - gfxAlphaBoxBlur::Init(): the return value.
>
> - gfxAlphaBoxBlur::BlurInsetBox() + FillDestinationPath(): the parameters.
>
> - gfxAlphaBoxBlur::Paint() + DrawBlur(): the parameters.
>
> - nsNativeThemeWin::DrawWidgetBackground: the parameter, in tandem with
>   changing gfxWindowsNativeDrawing::mContext and
>   gfxWindowsNativeDrawing::BeginNativeDrawing().
>
> - nsCSSRendering::PaintGradient(): the parameter.
>
> - DocumentRenderParent::mCanvasContext.
>
> - ClipToContain() in BasicLayerManager.cpp: the parameter.
>
> - PaintLayerContext::mTarget.
>
> - InstallLayerClipPreserves3D() in BasicLayerManager.cpp: the parameter.
>
> - PaintWithMask() in BasicLayersImpl.cpp: the parameter.
>
> - gfxTextContextPaint::InitStrokeGeometry(): the parameter.
>
> - imgFrame::SurfaceForDrawing(): the parameter.
>
> - nsLayoutUtils::GetSnappedBaseline{X,Y}(): the parameter.
>
> - ComputeSnappedImageDrawingParameters(): the parameter.
>
> - The Android instance of nsPluginInstanceOwner::Paint(): the parameter.
>
> - nsSVGUtils::SetClipRect(): the parameter.
>
> - nsSVGUtils::Make{Fill,Stroke}PatternFor(): the parameter.
>
> - BufferAlphaColor::mContext.
>
> - DrawBackgroundColor() in nsCSSRendering.cpp: the parameter.
>
> - SetupBackgroundClip() in nsCSSRendering.cpp: the parameter.
>
> - MustCreateSurface() in ClippedImage.cpp.
>
> - nsCSSRendering::PaintGradient(): the nsRenderingContext parameter.
>
> - DumpFrames() in nsPrintEngine.cpp: this is commented-out code and it won't
>   compile because the nsRenderingContext is used in a bogus way (the call to
>   IsVisibleForPainting() has too many arguments). Perhaps all the
>   EXTENDED_DEBUG_PRINTING code can be removed? It looks ancient.
>
> - Three CreateReferenceRenderingContext() call sites only require a
>   DrawTarget (nsTextBoxFrame::DrawText(), CreateReferenceDrawTarget(),
>   LazyReferenceRenderingDrawTargetGetterFromFrame::GetRefDrawTarget()), so
>   creating a gfxContext is overkill.
>
> Finally, if I could choose a single function to magically Moz2Dify, it would 
> be
> AppUnitWidthOfStringBidi(). Changing its nsRenderingContext parameter to a
> DrawTarget would allow huge swathes of functions to be changed similarly, as
> well as nsBoxLayoutState::mRenderingContext and nsCSSOffsetState::rendContext.
> But it's a hard case; the embedded gfxContext end up being passed through
> nsFontMetrics::DrawString() to gfxTextRun::Draw() where it is put into a
> TextRunDrawParams and gets used in lots of complicated ways.
>
> Nick


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-07 Thread Jeff Gilbert
I think rather we should aim to provide a way for the application to
implement custom remapping of events. A simple version of this is
allowing the app to synthesize enough of the inputs to virtually use
our existing UI. I think this is much simpler than dredging around in
the WebGL API.

On Thu, Jan 7, 2016 at 4:12 PM, Robert O'Callahan  wrote:
> On Fri, Jan 8, 2016 at 11:08 AM, Jeff Gilbert  wrote:
>>
>> On Wed, Jan 6, 2016 at 8:07 PM, Luke Wagner  wrote:
>> > FWIW, I was wondering if we could go a step further and allow
>> > (optional) user interaction with the rendered DOM elements.  That way
>> > you could, say, select text on a 3d surface with a mouse or use an
>> >  tag.  It seems like this would be possible if the vertex/pixel
>> > shaders were constrained to, say, affine transformations which starts
>> > to sound rather similar to the whitelisting approach mentioned
>> > elsewhere in this thread for mitigating timing attacks.
>>
>> I think this should be handled outside of the WebGL API. This seems to
>> be a bunch of extra work and complexity to handle a relatively narrow
>> use-case. I believe it's much easier to just provide functionality in
>> the platform to allow a library to implement such specific uses.
>
>
> Some features are always going to be really hard to provide with fully
> custom UI. For example , for security reasons. Also full
> custom UI for text entry is practically impossible to implement on par with
> browser built-in UI, due to the need to integrate with system IMEs,
> assistive tech, spellchecking dictionaries, etc.
>
> That's why I think finding ways to integrate live Web content into WebGL
> scenes would be really valuable.
>
> Rob
> --
> lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
> toD
> selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
> rdsme,aoreseoouoto
> o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea lurpr
> .a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
> esn


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-07 Thread Jeff Gilbert
On Wed, Jan 6, 2016 at 8:07 PM, Luke Wagner  wrote:
> FWIW, I was wondering if we could go a step further and allow
> (optional) user interaction with the rendered DOM elements.  That way
> you could, say, select text on a 3d surface with a mouse or use an
>  tag.  It seems like this would be possible if the vertex/pixel
> shaders were constrained to, say, affine transformations which starts
> to sound rather similar to the whitelisting approach mentioned
> elsewhere in this thread for mitigating timing attacks.

I think this should be handled outside of the WebGL API. This seems to
be a bunch of extra work and complexity to handle a relatively narrow
use-case. I believe it's much easier to just provide functionality in
the platform to allow a library to implement such specific uses.


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jeff Gilbert
On Mon, Jan 4, 2016 at 4:46 PM, Robert O'Callahan  wrote:
> On Tue, Jan 5, 2016 at 10:46 AM, Kearwood "Kip" Gilbert <
> kgilb...@mozilla.com> wrote:
>
>> In WebVR, we often present UI as a Head's Up Display (HUD) that floats
>> in front of the user.  Additionally, we often wish to show 2d graphics,
>> video, and CSS animations as a texture in 3d scenes.  Creating these
>> textures is something that CSS and HTML are great at.
>>
>> Unfortunately, I am not aware of an easy and efficient way to capture an
>> animated view of an interactive HTML Element and bring it into the WebGL
>> context.  A "moz-element" -like API would be useful here.
>>
>> Perhaps we could solve this by implementing and extending the proposed
>> WEBGL_dynamic_texture extension:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
>
> This proposal seems unnecessarily complex. Is there a way for me to send
> feedback?

Yeah, we have a public mailing list: public_we...@khronos.org
As with anything WebGL related, you can also just talk to me about it.

>
> Essentially, we would extend the same API but allow the WDTStream
>> interface to apply to more HTML elements, not just HTMLCanvasElement,
>> HTMLImageElement, or HTMLVideoElement.
>>
>> We would also need to implement WEBGL_security_sensitive_resources to
>> enforce the security model:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
>
> I wish I'd known about this proposal earlier! This looks pretty good,
> though I'd always thought this would be too complicated to spec and
> implement to be practical. Glad to be wrong! Although I think we should get
> as much feedback as possible on this in case of hidden gotchas.

Historically in our investigations of this, it seemed very very hard
to guarantee a lack of timing vectors, even just for arithmetic.
(Particularly since we're handing the sources to the driver, which
will try to optimize away as much as it can)

>
> Does this sound like a good idea?  I feel that this is something that
>> all WebGL developers would want, as it would make building front-ends
>>
> for games much easier.
>>
>
> Yes, I think together these would be very useful.

It only helps some games. Most game development efforts will likely be
unaffected, since it's generally easiest to use an existing engine,
which will already have such things handled.

Most games will either just use overlayed HTML, or have an in-engine
solution for UI. The desire to embed web elements in a scene itself is
relatively rare. (Certainly few existing games do this)

>
> If others feel the same, I would also like to follow up with a proposal
>> to make the captured HTML elements interactive through use of an
>> explicit "pick buffer" added to canvases.
>>
>
> How would that work? Being able to synthesize mouse (touch?) events in HTML
> elements would add another set of issues.
>
> I assume the idea of mixing CSS 3D-transformed elements into a WebGL scene
> has been rejected for some reason?

This can't reasonably be done given the level of abstraction provided
by GL APIs.


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jeff Gilbert
WEBGL_dynamic_texture is due for a pruning refactor (I think I'm on
the hook for this), so don't base anything on it as-is.

IIRC, we don't believe WEBGL_security_sensitive_resources is viably
implementable in a safe way (timing attacks!), so I suggest ignoring
it.

Extending texture uploads to allow dom::Element uploads is easily done
from a technical standpoint, but doing it efficiently will take novel
non-trivial work. (not every dom::Element has a Layer/Image)

Adding picking to WebGL is a non-starter.

From an API standpoint, it could be interesting to try to use
ImageBitmaps as handles to snapshots of dom::Elements.

I think that it would be most efficient just to have a meeting about
these topics, instead of iterating slower via email.

-Jeff

On Mon, Jan 4, 2016 at 1:46 PM, Kearwood "Kip" Gilbert
 wrote:
> Hello All,
>
> In WebVR, we often present UI as a Head's Up Display (HUD) that floats
> in front of the user.  Additionally, we often wish to show 2d graphics,
> video, and CSS animations as a texture in 3d scenes.  Creating these
> textures is something that CSS and HTML are great at.
>
> Unfortunately, I am not aware of an easy and efficient way to capture an
> animated view of an interactive HTML Element and bring it into the WebGL
> context.  A "moz-element" -like API would be useful here.
>
> Perhaps we could solve this by implementing and extending the proposed
> WEBGL_dynamic_texture extension:
>
> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
> Essentially, we would extend the same API but allow the WDTStream
> interface to apply to more HTML elements, not just HTMLCanvasElement,
> HTMLImageElement, or HTMLVideoElement.
>
> We would also need to implement WEBGL_security_sensitive_resources to
> enforce the security model:
>
> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
> Does this sound like a good idea?  I feel that this is something that
> all WebGL developers would want, as it would make building front-ends
> for games much easier.
>
> If others feel the same, I would also like to follow up with a proposal
> to make the captured HTML elements interactive through use of an
> explicit "pick buffer" added to canvases.
>
> I look forward to your feedback.
>
> Cheers,
>   - Kearwood "Kip" Gilbert
>


Re: Dan Stillman's concerns about Extension Signing

2015-11-25 Thread Jeff Gilbert
On Wed, Nov 25, 2015 at 3:16 AM, Till Schneidereit
 wrote:
> FWIW, I received questions about this via private email and phone calls
> from two people working on extensions that support their products. Their
> extensions sit in the review queue with not chance of getting through it
> before the signing requirement kicks in. This puts them into a situation
> where their only reasonable course of action is to advise their users to
> switch browsers.
>

Is it just me, or does this sound completely unacceptable? Sloughing off
more users? Things like this are why it's hard not to be cynical.

I doubt anyone is going to switch to Firefox because our extension
signing is safe. (though I do think we should have some form of
signing) But they will gladly switch away when anything breaks,
particularly when we reduce the activation energy needed to switch: If
their extension won't work in new Firefox, it doesn't matter so much
that they won't have that extension in, say, Chrome.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-16 Thread Jeff Gilbert
On Thu, Jul 16, 2015 at 6:50 AM, Ehsan Akhgari 
wrote:

> On 2015-07-15 6:47 PM, Jeff Gilbert wrote:
>
>> "Arg warts improve backtracking for debugging"
>> Regardless of the validity of  "Arg warts help illustrate information
>> flow", the use-case of backtracking to 'origin' of aFoo is also
>> unconvincing, since you'd only need to go back once more than previously
>> before moving once back 'up'. Either way sounds eminently automateable.
>> -> No practical change, thus irrelevant.
>>
>
> I don't understand what you're saying here at all.  It seems that you're
> saying that this use case is not interesting to you, and that's OK, but
> that doesn't make it irrelevant.  :-)
>

Not at all! Rather, that either way it would be workable. The tiny extra
bit of effort it would take without aFoo is completely solved by
automation, and if this is a common enough use-case for this tiny extra bit
of effort to matter, it should *already* have been automated. Thus I don't
have a ton of sympathy for making a readily-automateable process require
differentially more effort without automation.


>
>  "It's hard to implement"
>> This is irrelevant for new code, and bulk-change is feasible once we have
>> -Wshadow. (which isn't too hard. We shouldn't have shadowing in our code
>> anyway, so we want -Wshadow regardless of this discussion) Besides, a
>> number of our internal projects and modules already don't have it, so it's
>> free for them.
>> -> Non-trivial, but not too hard to implement.
>>
>
> The "isn't too hard" part of the above is factually incorrect.  See bug
> 563195 for why.
>

I have read it, and it doesn't seem insurmountable. Just because it's not
free doesn't make it 'too hard'. I am naturally willing to work on this
myself.


>
>  So, when you say "I like aFoo because it lets me know which vars are
>> args",
>> is there any un-touched-on aspect of the information "this var is an
>> argument" that is useful to know? How does every other project survive
>> without it?
>>
>
> Given Benjamin's response, there is really no point in arguing over this
> any more, but just as a thought exercise: is there anything that would
> convince you that this is useful for some people?


There's only no point here if Benjamin's proposal carries! Besides, this
discussion is certainly a subset of the boil-the-ocean discussion. If aFoo
is truly valuable, then removing it via Google Style would hurt us just as
removing it a la carte would.

What could convince me? Reasoned and consistent arguments, perhaps examples
or anecdotes where aFoo really came in handy that wouldn't have been
covered under general good code style. Maybe someone might have done a
study on this style. Maybe there are other large projects which use this
style. Tautologically: Convincing arguments!


> I would expect having those people tell you that it's useful for them
> should be a good enough reason to assume that it is!
>

Especially as part of standards body, I'm perfectly fine with dismissing
requests for things people claim to want, but can't provide sufficient
reasoning for, [1] or likewise being unable to argue my own case
successfully. I'm well-acquainted with being convinced to change my
position through discussion and debate, and also with convincing others.
(or not!)

This is why we have these discussions! Claims are not substantiated simply
because people claim them to be true. We need to lay the reasons on the
table and then weigh them, as without reasons, choices are arbitrary.
Sometimes we even need to do it more than once, as time changes things.
(even if people hate doing this!) We may find no need for change, or we may
instead find we can change for the better.

[1]: To be clear, "sufficient reasoning" can definitely include "culture",
"harsh realities", "pragmatism", and other such things. They are not all
granted the same gravity, but they all contribute.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-15 Thread Jeff Gilbert
On Tue, Jul 14, 2015 at 7:55 AM, Thomas Zimmermann 
wrote:

> The discussion has a number of good points in favor of using 'a', but I
> missed convincing arguments in favor of not using 'a'. Are there any? I
> don't consider "I don't get what 'a' is good for" a convincing argument.
>

On the other hand, I'm still unconvinced by the pro-'a' arguments I've read
here. Besides roc's point about refactoring, the argument against aFoo is
mainly about whether the information added is worth the noise. Adding
information is not always worth it, since useless information is noise.
(and increases cognitive load) The extremes are "variables are always bare"
(no mFoo/gFoo/sFoo/kFoo) and "extreme Hungarian notation". (where each
variable name references its scope, type, and favorite color) The former is
missing very valuable information (implicit member vars!), whereas the
latter definitely gets in the way.

The primary argument here is whether the information aFoo adds is worth the
increased cognitive load. (both of holding the information and parsing the
'real name' out of the annotated variable name)

I hold that aFoo does not add useful information, since outparams should be
standardized to something obvious and distinct from aFoo, and I otherwise
treat arguments as locals within a function.
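One "obvious and distinct" outparam convention might look like the following sketch (the function and its names are hypothetical, purely for illustration):

```cpp
#include <cassert>
#include <string>

// An explicit out-prefix marks the one argument that actually flows back
// to the caller; the plain inparam needs no wart at all.
bool ParsePort(const std::string& spec, int* outPort) {
  if (spec.empty()) {
    return false;  // outPort is left untouched on failure
  }
  *outPort = std::stoi(spec);
  return true;
}
```

A return value (or a std::optional-style result) is often better still, but where a pointer outparam is unavoidable, the prefix carries the information aFoo was meant to.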


Let's divide things up for clarity:

"Args should be immutable"
Clearly other people here treat arguments differently, in one instance
where the recommendation is that we should not to assign to them. (in which
case they should really all just be const) This seems strange to me though,
since I don't see why arguments should be immutable moreso than all
variables should be as immutable as possible. Unilaterally requiring that
arguments be const seems at odds with acknowledging that mutability is useful.
(And regardless, if we want immutable args, we should enforce immutable
args with const, not just name them differently)
-> Wrong solution for the illustrated (and not-agreed-upon) problem.

"Arg warts help illustrate information flow"
If we instead don't require that args must be const, then args seems little
different than intermediary locals derived from args. A subsequent call or
result of a function may in fact be (eventually) purely derived from args,
yet not visibly pure in that it references non-arguments. If we're tracing
information flow, shouldn't the a-prefix be contagious for any local
derived purely from existing a-prefixed vars? This seems like a distraction
to me. Can't we just read the functions instead of using an unreliable
heuristic?
-> Does an unreliable heuristic warrant having a-warts? I don't think so,
but this is likely the most contentious aspect.

"Arg warts improve backtracking for debugging"
Regardless of the validity of  "Arg warts help illustrate information
flow", the use-case of backtracking to 'origin' of aFoo is also
unconvincing, since you'd only need to go back once more than previously
before moving once back 'up'. Either way sounds eminently automateable.
-> No practical change, thus irrelevant.

"It's hard to implement"
This is irrelevant for new code, and bulk-change is feasible once we have
-Wshadow. (which isn't too hard. We shouldn't have shadowing in our code
anyway, so we want -Wshadow regardless of this discussion) Besides, a
number of our internal projects and modules already don't have it, so it's
free for them.
-> Non-trivial, but not too hard to implement.


So, when you say "I like aFoo because it lets me know which vars are args",
is there any un-touched-on aspect of the information "this var is an
argument" that is useful to know? How does every other project survive
without it?

A convincing argument is one that would have been a good reason to switch
to aFoo in the first place. Without this, we don't have reasons, we just
have momentum.


Re: Switch to Google C++ Style Wholesale (was Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++)

2015-07-15 Thread Jeff Gilbert
On Tue, Jul 14, 2015 at 9:17 PM, Nicholas Nethercote  wrote:

> On Tue, Jul 14, 2015 at 8:06 PM, Bobby Holley 
> wrote:
> > I'm not wild about this idea.
>
> It's such a boil-the-ocean solution I honestly thought bsmedberg was
> joking at first...
>

Well consistency is a major concern, so as long as the oceans are well and
truly boiled...

I'm certainly no fan of snake_case, but the literature does say it's more
readable, and their data is better than my anecdotes.

This Modest Proposal has my vote.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-13 Thread Jeff Gilbert
On Sun, Jul 12, 2015 at 6:45 PM, Thomas Zimmermann 
wrote:

> Am 08.07.2015 um 16:36 schrieb smaug:
> > Do you actually have any data how many % of Gecko devs would prefer
> > not using aFoo?
>
> I strongly prefer 'aFoo' over 'foo' for the extra context that it gives
> to the variable. If we want to change anything we should rather
> introduce a separate prefix for output parameters.
>

Which part of this extra context is useful?


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-13 Thread Jeff Gilbert
On Sun, Jul 12, 2015 at 10:45 PM, Nicholas Nethercote <
n.netherc...@gmail.com> wrote:

> On Thu, Jul 9, 2015 at 7:01 PM, Jeff Gilbert  wrote:
> >
> > Arguments have the same lifetimes as locals, and the only exceptions to
> > this apply to both args and locals. (references and pointers)
>
> Maybe I've misunderstood what you're saying here, but locals are
> frequently block-scoped which gives them a different lifetime to args.
>

Right, args have the lifetimes of function-block-scoped locals.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-09 Thread Jeff Gilbert
On Wed, Jul 8, 2015 at 7:48 AM, smaug  wrote:

> Another example where aFoo tends to be rather useful is lifetime
> management.
> If I see aFoo being used somewhere in a method after some unsafe method
> call
> (layout flush, any script callback handling, event dispatch, observer
> service notification etc.),
> I know that I need to check that the _caller_ follows COM-rules and keeps
> aFoo object alive during the method call.
> With non-aFoo variables I know the lifetime is controlled within the
> method.
>

Arguments have the same lifetimes as locals, and the only exceptions to
this apply to both args and locals. (references and pointers)
There are the same problems with locals as there are with args, as args are
just locals passed into the function.
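The equivalence can be sketched like this (the types and the commented-out unsafe call are hypothetical, only there to illustrate the point):

```cpp
#include <cassert>

struct Node { int refs = 1; };

// The caller must keep aNode alive across any unsafe call below -- and
// exactly the same obligation exists for the raw-pointer local, so the
// a-prefix identifies no lifetime hazard that a local lacks.
int Describe(Node* aNode) {
  Node* local = aNode;
  // RunScriptThatMightReleaseThings();  // either pointer could dangle here
  return local->refs;
}
```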

FWIW, I hacked a Python script that does s/aF[Oo]o/foo/, and MSVC does have
a code analysis mode that supports an error-on-warn for shadowed variable
names.

I can take a look at how bad the shadowing is in the tree.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Mon, Jul 6, 2015 at 8:26 PM, Mike Hommey  wrote:

> The existence of aFoo goes along with the existence of mFoo, sFoo, kFoo,
> and others I might have forgotten. Not that I particularly care about
> aFoo, but why strike this one and not the others?[1] It's not like they
> have widespread use in the industry either. That is, the same reasoning
> could be applied to them, yet, you're stopping at aFoo. Why?


mFoo and sFoo have very different scopes compared to locals, so calling
them out is useful.
kFoo makes it clear that the variable is constant, and has connotations
regarding it being a hardcoded limit or value.
Note that commonly kFoo is actually a static constant. Immutable (`const`)
locals are often not kPrefixed.
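The distinction drawn above can be summarized in one hypothetical class (names invented for illustration):

```cpp
#include <cassert>
#include <cstddef>

class Cache {
 public:
  static const std::size_t kMaxEntries = 64;  // kFoo: constant / hard limit

  std::size_t Count() const { return mCount; }
  void Add() { ++mCount; ++sTotalAdds; }

 private:
  std::size_t mCount = 0;         // mFoo: member -- outlives any one call
  static std::size_t sTotalAdds;  // sFoo: static -- shared across instances
};

std::size_t Cache::sTotalAdds = 0;
```

Each of these prefixes marks a scope or storage class genuinely different from an ordinary local, which is the asymmetry with aFoo being argued here.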


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 6:06 PM, Karl Tomlinson  wrote:

> Jeff Gilbert writes:
>
> > I work with a number of these, but after a page or two, why is it at all
> > relevant which vars were args? For information flow? Should we mark
> locals
> > that purely derive from args as `aFoo` as well? Long functions (which
> have
> > poor readability anyway) generally have so much going on that the trivia
> of
> > which vars are args does not seem very useful..
> >
> > I do not see how `aFoo` helps here, so please expand on this.
>
> A simple variable name, such as "font" for example, may identify
> any of a number of fonts.  Such simple names, without any
> qualifier in the name, are often used in loops, for example,
> because it is the most important font in the immediate context.
>
> However a simple variable may also be used in a parameter list
> because when looking at the parameter list it is obvious which
> font is relevant in the interface.
>
> That means that if "font" is seen in the body of a function, the
> first question that arises is "which font?"
>

If it's named well, there should be no question which it refers to, with or
without argument decoration.

>
> If the variable is called "aFont" then we know which font because
> we know what function we are in.
>

Use aFont if it helps, just as we use iFoo and fFoo sometimes when doing
conversions.
Don't require it though.
In particular, `newSize` is better than `aSize` when also dealing with
mSize. Inferring the meaning of a variable from its status as an argument
is a crutch for poor variable naming. (and adds mental load)
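The newSize-vs-aSize point reads roughly like this sketch (a hypothetical Box type, not real tree code):

```cpp
#include <cassert>

struct Box {
  int mSize = 0;

  // "newSize" answers "which size?" on its own; "aSize" next to mSize
  // would only say "it arrived as an argument".
  bool Resize(int newSize) {
    if (newSize == mSize) {
      return false;  // nothing to do
    }
    mSize = newSize;
    return true;
  }
};
```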


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 5:41 PM, Karl Tomlinson  wrote:

> Jeff Gilbert writes:
>
> > It can be a burden on the hundreds of devs who have to read and
> understand
> > the code in order to write more code.
>
> Some people find the prefix helps readability, because it makes
> extra information immediately available in the code being
> examined, while you are indicating that this is a significant
> burden on readability.
>
> Can you explain why the extra letter is a significant burden?
>

Because extra noise is being forced into variable names for minimal
benefit. Every declaration is a sea of extra 'a's. Refactoring code means
doing a lot of s/aFoo/foo/ and vice-versa. Reading each arg name requires
first passing over 'a' before getting to anything relevant. Often this
means that short function bodies can have every fifth or sixth letter being
'a'.

Why does it matter that they're arguments? Arguments are just locals that
are passed into functions. (outparams are a different beast already
addressed elsewhere)

I would like to reiterate that *we are unusual* in our present preference
(not to mention requirement) for this in our style guideline. I'm not
proposing a change to something bizarre. I'm proposing the removal of an
extremely unusual style requirement.

If the 'a' prefix is a burden then the 'm' prefix must be also,
> and so we should be using this->member instead of mMember.
>

No: the 'm' prefix provides a huge benefit by indicating when we are
referencing a member variable, and member variables differ greatly from local
variables in scope. Arguments have the same scope as locals. There is benefit
in mFoo, particularly compared to requiring this->foo, which I don't think we
even have compiler or linter support for, and which would clearly be
superfluous in terms of extra characters.

>
> > The opinions of a few over-harried
> > reviewers should not hold undue sway over the many many devs writing
> > code.
>
> unless people want code to be reviewed.
>

`aFoo` is never going to make or break our ability to do code review. Low
code-review bandwidth is all but completely orthogonal to this discussion.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 3:59 PM, Eric Rahm  wrote:

> I'm not a huge fan of the 'aFoo' style, but I am a huge fan of
> consistency. So if we want to change the style guide we should update our
> codebase, and I don't think we can reasonably do that automatically without
> introducing shadowing issues.
>

MSVC 2013 (which I believe is our main Windows compiler right now) will
error during compilation if such a shadowing issue arises. Thus, if the
code compiles there, `aFoo`->`foo` is safe. I would be very surprised if
GCC or Clang didn't have an equivalent option.


> Additionally I don't spend 50% of my time reviewing, so I'd say my opinion
> here (meh to aFoo) is less important. It's not an undue burden for me to
> include an aPrefix and if we have static analysis to check for it that
> would make it even less of an issue.
>

It can be a burden on the hundreds of devs who have to read and understand
the code in order to write more code. With the exception of a couple
people, review is not the bottleneck. The opinions of a few over-harried
reviewers should not hold undue sway over the many many devs writing code.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
Outvars are good candidates for having markings in the variable name.
`aFoo` for all arguments is a poor solution for this, though.

On Tue, Jul 7, 2015 at 1:22 PM, smaug  wrote:

> On 07/07/2015 11:18 PM, Jeff Gilbert wrote:
>
>> On Tue, Jul 7, 2015 at 1:03 PM, smaug  wrote:
>>
>>> As someone who spends more than 50% of working time doing reviews I'm
>>> strongly against this proposal.
>>> aFoo helps with readability - reader knows immediately when the code is
>>> dealing with arguments.
>>>
>>>
>> When and why is this useful to know?
>>
>>
>
> Most common case in Gecko is to know that one is assigning value to
> outparam.
>

