Re: mach has landed

2012-10-02 Thread Jeff Gilbert
As an additional data point, my experience is that the interactivity of my 
machine is not noticeably impacted when I overcommit with -j12 on my 
4-core/8-thread i7 Windows machine. Task Manager shows the cores often pegged at 
100%, but the machine basically behaves normally. Nor is my Ubuntu 
8-core/16-thread Linux box (-j16, sometimes -j24) noticeably affected when it's 
fully pegged.

Clearly it depends on what you're trying to do, but for my uses, I just spin up a 
build and do other things normally until it's done, regardless of how 
over-committed it is.
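
If the machine ever does bog down, one hedged option on Linux is to drop the 
build's CPU priority; the -j value and the exact make invocation below are 
illustrative, not a recommendation from this thread:

# Lowest scheduling priority: interactive work wins whenever it wants a
# core, while the build still soaks up all idle cycles.
nice -n 19 make -f client.mk -j12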

- Original Message -
From: Karl Tomlinson mozn...@karlt.net
To: dev-platform@lists.mozilla.org
Sent: Tuesday, October 2, 2012 3:05:55 PM
Subject: Re: mach has landed

On Fri, 28 Sep 2012 11:44:56 -0700, Gary Kwong wrote:

 http://blog.johnford.org/new-mac-builders-ssds-j-settings/

 Quoted from that blog:

 I did find that it is better to set the -j setting too high than
 it is to set it too low.

 -Gary

Better in terms of build time, which is the right metric for a
dedicated build machine.

But bear in mind that a developer is usually trying to use the
machine at the same time as doing a build.  Adding more context
switching load to the system without significantly improving build
times is not necessarily a win for the developer.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-01 Thread Jeff Gilbert
Could this be why I'm starting to see banner ads in some of my newtab-page 
snapshots, despite never seeing them in a tab? I don't appear to see this on an 
old nightly24 snapshot I have lying around.

-Jeff

- Original Message -
From: Gavin Sharp ga...@gavinsharp.com
To: dev-platform dev-platform@lists.mozilla.org
Cc: firefox-dev firefox-...@mozilla.org
Sent: Tuesday, July 30, 2013 4:10:34 PM
Subject: reminder: content processes (e10s) are now used by desktop Firefox

I've mentioned this at the engineering meeting, but thought it worth a note
here just to ensure everyone is aware:

Bug 870100 enabled use of the background thumbnail service in Firefox
desktop, which uses a <browser remote="true"> to do thumbnailing of pages in
the background.

That means that desktop Firefox now makes use of E10S content processes.
They have a short lifetime (one page load) and are generally triggered by
opening about:newtab when thumbnails are missing or out of date (2 days
old).

This has exposed some e10s crashes that previously weren't exposed on
desktop. I've filed https://bugzilla.mozilla.org/show_bug.cgi?id=899758 to
track them - please hang any other such crashes off that bug. If you're
working in a component that has e10s-related crashes, please fix them :)

(Bug 891218 is also planning to make use of content processes for some
Social-related functionality. Those remote processes will be longer-lived,
typically having the same lifetime as the parent process.)

Gavin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Jeff Gilbert
It's certainly worrying given the number of security- and privacy-related 
add-ons people rely on. Seeing ads in thumbnails is relatively harmless 
(if disconcerting), but if someone is relying on an add-on for important 
security or privacy reasons, and we auto-updated them and bypassed their 
protections, that's more serious.

-Jeff

- Original Message -
From: Philip Chee philip.c...@gmail.com
To: dev-platform@lists.mozilla.org
Sent: Friday, August 2, 2013 12:30:29 PM
Subject: Re: reminder: content processes (e10s) are now used by desktop Firefox

On 02/08/2013 16:57, t...@adblockplus.org wrote:

 The code in question was explicitly running in Firefox Mobile only.
 It used messageManager.loadFrameScript() API to inject code into the
 content process of new tabs - I doubt that it will work the same
 here, Adblock Plus would probably need to look explicitly for these
 <browser remote="true"> elements (is there an event when they are
 created?).
 
 Altogether, supporting this in Adblock Plus should be possible - but
 it will require significant amounts of additional code and introduce
 quite a bit of new complexity. I also have doubts whether this is
 work that should receive priority.

It has just occurred to me that Flashblock would probably be affected
similarly.

Phil

-- 
Philip Chee phi...@aleytys.pc.my, philip.c...@gmail.com
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: vsync proposal

2013-08-13 Thread Jeff Gilbert
SwapBuffers with SwapInterval(1) can basically block (or not) whenever it wants 
to. With SwapInterval(0), I'm not sure we can guarantee we won't have 
tearing (except on platforms which don't tear), since we don't really know how 
long SwapBuffers will take, or when vblank is. (This can probably be handled for 
each platform independently, though.)
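
For reference, a minimal sketch of the knob in question, assuming an 
already-initialized EGL display/surface (the WGL/GLX equivalents are 
wglSwapIntervalEXT and glXSwapIntervalEXT):

/* interval = 1: SwapBuffers may block until the next vblank (no tearing).
 * interval = 0: swap immediately; may tear unless the platform prevents it. */
eglSwapInterval(display, 1);
eglSwapBuffers(display, surface);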

I'm fairly confident that accumulated RAFs should be triggered after each 
vblank where we composite (probably even when we choose not to composite 
because nothing changed).
This allows users of RAF to get whatever 
vsync/no-vsync/sync-control/low-latency behavior they need.

Bas is also correct about lowering latency by waiting until closer to vblank, 
and continuing to update the data the compositor renders from, instead of just 
trying to finish a frame as soon as possible. The proposal reads to me as 
saying that low-latency work should be handled earlier, but this is only 
true if you would otherwise miss the vblank window. What Bas described is 
exactly what we want, to my knowledge.

We should definitely prioritize low-latency for the (any?) window which is 
receiving user input, though. This may mean paradoxically doing the stuff we 
care less about *before* the stuff we want to be low-latency. (With some 
padding for safety, of course)

-Jeff


- Original Message -
From: Robert O'Callahan rob...@ocallahan.org
To: Bas Schouten bschou...@mozilla.com
Cc: Jeff Muizelaar jmuizel...@mozilla.com, Vladimir Vukicevic 
vladi...@mozilla.com, dev-platform@lists.mozilla.org, avih...@yahoo.com
Sent: Tuesday, August 13, 2013 6:25:03 PM
Subject: Re: vsync proposal

 On Wed, Aug 14, 2013 at 12:13 PM, Bas Schouten bschou...@mozilla.com wrote:

 Since you want composition to occur whenever drawing has occurred, I don't
 really think it should be what's doing any processing for the vblank
 event. The VBlank event should just fire, indiscriminately, whenever VBlank
 occurs. I believe we should then consume that event on whatever thread is
 actually doing drawing which might want to know about this (a user input
 thread? the main thread?), and these should then decide when to do their
 drawing and whether that warrants a new composition. That way you could
 have, just after vblank, all your scheduled drawing be done, and a
 composition scheduled. But then if, right before the next vblank, a
 panning/zooming event comes in, you can still process that and present a new
 frame to minimize latency (if desired).


That would be good in some ways, but it doesn't handle off-main-thread
animation when the main thread's drawing is late (e.g. because the main
thread is stuck doing something else). For that we must have a way to
control composition based on VBlank signals that doesn't involve the main
thread.


 What you could also do is do all your main thread drawing and processing
 rAF things just after vblank, since they'll likely be somewhat time
 consuming, and schedule your composition only for example 5 ms before
 VBlank (since that will likely be fast). You could then make some
 adjustments to the layer tree through the APZC for user input later in the
 frame, and still have those taken into account for composition, that would
 give you a minimal input latency without wasted work and still starting the
 content drawing required as early as possible giving it maximum opportunity
 to make the next frame. This may be a little bit too complicated for a
 first implementation, but I wanted to throw it out there as a suggestion as
 well.


That sounds exactly like the Lowering Latency extension in the wiki page
(if we treat everything as a low latency window). If you agree, maybe we
should just do this.

Rob
-- 
Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w  *
*
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: vsync proposal

2013-09-06 Thread Jeff Gilbert
Could you clarify number 2? It's a little dense. 

-Jeff

- Original Message -
From: Robert O'Callahan rob...@ocallahan.org
To: Bas Schouten bschou...@mozilla.com
Cc: Jeff Muizelaar jmuizel...@mozilla.com, Vladimir Vukicevic 
vladi...@mozilla.com, dev-platform@lists.mozilla.org, avih...@yahoo.com
Sent: Thursday, September 5, 2013 7:51:02 PM
Subject: Re: vsync proposal

I had some off-thread discussion with Bas about lowering latency. We seem
to have agreed on the following plan:

On some non-main thread:
1. Wait for vsync event
2. Dispatch refresh driver ticks to all windows that don't already have a
pending refresh driver tick unacknowledged by a layer tree update response
3. Wait for N ms, processing any layer-tree updates (where N =
vsync interval - generously estimated time to composite all windows)
4. Composite all updated windows
5. Goto 1

This could happen on the compositor thread or some other thread, depending
on how "wait for vsync event" is implemented on each platform.
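
A C++ sketch of that loop, for concreteness; every name below is illustrative
rather than an actual Gecko API:

// Runs on the compositor thread (or wherever "wait for vsync" lives).
void VsyncLoop() {
  while (true) {
    WaitForVsync();                                // step 1
    for (Window* w : AllWindows()) {               // step 2
      if (!w->HasUnacknowledgedRefreshTick()) {
        w->DispatchRefreshDriverTick();
      }
    }
    // step 3: N = vsync interval - generous composite-time estimate
    ProcessLayerTreeUpdatesFor(Milliseconds(N));
    CompositeUpdatedWindows();                     // step 4
  }                                                // step 5: goto 1
}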

Rob
-- 
Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w  *
*
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla Angle should enable D3D 11 renderer

2013-09-26 Thread Jeff Gilbert
This is already tracked here:
https://bugzilla.mozilla.org/show_bug.cgi?id=912196

-Jeff

- Original Message -
From: xunxun xunxun1...@gmail.com
To: dev-platform@lists.mozilla.org
Sent: Thursday, September 26, 2013 4:40:03 AM
Subject: Mozilla Angle should enable D3D 11 renderer

Hi,

ANGLE can use either a backing renderer which uses D3D 11 on systems 
where it is available, or a D3D 9-only renderer.

To enable the D3D 11 renderer:

 1. Open $(BASE)/src/libGLESv2/renderer/Renderer.h
 2. Locate the definition of ANGLE_ENABLE_D3D11, near the head of
the file.
 3. Ensure that the definition of ANGLE_ENABLE_D3D11 is 1 

When the D3D 11 renderer is enabled, ANGLE will use D3D 11 if it is
available at runtime. If it is not, ANGLE will fall back to using a
D3D 9 renderer. 


-- 
Best Regards,
xunxun
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Audit your code if you use mozilla::WeakPtr

2013-10-08 Thread Jeff Gilbert
It would appear to me that you really need to promote this to a strong 
reference while you're using it.
Even `if (foo) foo->bar();` could theoretically have the value of `foo` change 
between the branch and the deref.

Also, do we at least `MOZ_ASSERT(get())` in `operator->`? Yes, it will crash 
immediately anyway, but it's always nicer to have something say 'Hey, you 
tried to deref a null weak-ptr'.
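
A sketch of both points; the RefPtr promotion assumes the pointee is
refcounted, and the operator-> body is illustrative rather than the actual
mfbt code:

// Promote to a strong reference so the object can't die between
// the null check and the call:
RefPtr<Foo> strong = weakFoo.get();
if (strong) {
  strong->bar();
}

// An asserting operator-> for WeakPtr<T> itself:
T* operator->() const {
  MOZ_ASSERT(get(), "Tried to deref a null WeakPtr");
  return get();
}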

-Jeff

- Original Message -
From: David Major dma...@mozilla.com
To: Ehsan Akhgari ehsan.akhg...@gmail.com
Cc: dev-platform@lists.mozilla.org
Sent: Tuesday, October 8, 2013 4:38:24 PM
Subject: Re: Audit your code if you use mozilla::WeakPtr

Does it? I don't think I'm any more or less likely to omit a validity check 
using operator->() vs. get(). Maybe it's just me.

It seems like get() might actually be *more* prone to failure. Imagine:

Class* object = foo.get();
if (object) {
  object->Method();
}
// ... A lot of stuff happens and the ref blows up ...
if (object) {
  object->Method(); // oops
}


- Original Message -
From: Ehsan Akhgari ehsan.akhg...@gmail.com
To: David Major dma...@mozilla.com
Cc: dev-platform@lists.mozilla.org
Sent: Tuesday, October 8, 2013 4:27:03 PM
Subject: Re: Audit your code if you use mozilla::WeakPtr

On 2013-10-08 7:10 PM, David Major wrote:
 Isn't it ultimately up to the developer to get it right? Someone could just 
 as well forget to use |if (object)| from your example.

 Here's a sample usage from the header file:
   *   // Test a weak pointer for validity before using it.
   *   if (weak) {
   *     weak->num = 17;
   *     weak->act();
   *   }

Sure, but that convenience operator makes it natural to write 
incorrect code by default.  It's better to be explicit and correct than 
implicit and wrong.  :-)

Cheers,
Ehsan

 - Original Message -
 From: Ehsan Akhgari ehsan.akhg...@gmail.com
 To: dev-platform@lists.mozilla.org
 Sent: Tuesday, October 8, 2013 3:54:17 PM
 Subject: Audit your code if you use mozilla::WeakPtr

 Benoit Jacob and I discovered a bug in WeakPtr (bug 924658) which makes its
 usage unsafe by default.  The issue is that WeakPtr provides convenience
 methods such as operator->, which mean that the consumer can directly
 dereference it without the required null checking first.  This means that
 you can have code like the below:

 WeakPtr<Class> foo = realObject->asWeakPtr();
 // ...
 foo->Method();

 That will happily compile and will crash at runtime if the object behind
 the weak pointer is dead.  The correct way of writing such code is:

 Class* object = foo.get();
 if (object) {
   object->Method();
 }

 I don't know enough about all of the places which use WeakPtr myself to fix
 them all, but if you have code using this in your module, please spend some
 time auditing the code, and fix it.  Please file individual bugs for your
 components blocking bug 924658.

 Thanks!
 --
 Ehsan
 http://ehsanakhgari.org/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Layer Construction for Off Main Thread canvas

2013-10-31 Thread Jeff Gilbert
I have no idea about construction and resizing, but WebGL's frame flinging (and 
by extension SkiaGL's) should Just Work already.
WebGL content blithely calls SurfaceStream::SwapProducer, and somewhere in 
Layers/TextureClient/Host code there's a call to SurfaceStream::SwapConsumer.
The only thing WebGL needs moved between the client and host sides of things is 
the SurfaceStream pointer.

-Jeff

- Original Message -
From: Robert O'Callahan rob...@ocallahan.org
To: Nicolas Silva nical.si...@gmail.com
Cc: Kyle Huey m...@kylehuey.com, Morris Tseng mts...@mozilla.com, Jeff 
Gilbert jgilb...@mozilla.com, Benoit Jacob bja...@mozilla.com, Milan 
Sreckovic msrecko...@mozilla.com, Matt Woodrow mwood...@mozilla.com, 
Nicholas Cameron n...@mozilla.com, dev-platform 
dev-platform@lists.mozilla.org
Sent: Thursday, October 31, 2013 12:54:42 PM
Subject: Re: Layer Construction for Off Main Thread canvas

On Thu, Oct 31, 2013 at 11:19 PM, Nicolas Silva nical.si...@gmail.com wrote:

 With OMTC these days the Layer classes don't hold any logic. All the fancy
 stuff goes into classes inheriting from CompositableClient (and
 CompositableHost on the compositor side). While Layer classes can only be
 manipulated from the main thread and use the PLayerTransaction IPDL
 protocol to sync with the compositor, Compositable classes can be used by
 either the PLayerTransaction or the PImageBridge protocol. The logic to
 attach a Compositable using PImageBridge to a Layer using PLayerTransaction
 is rather simple (look at ImageClientBridge and ImageContainer) and mostly
 implemented in the CompositableClient/Host abstraction so you don't have to
 redo most of it.

 As Nick said you could have on the main thread a dummy CanvasClient
 analogous to ImageClientBridge which just forward an asyncID which is the
 ID used to connect the layer and the compositable on the compositor side,
 and on the ImageBridge thread you would have the usual CanvasClient. Then
 you'd need the equivalent of ImageContainer that would interface with the
 WebGLContext and forward frames to the ImageClient on the ImageBridge
 thread.


This last point is very important. It's very important that CanvasClient
and ImageBridge and stuff like that not be exposed outside the layers system.

I think someone from gfx really needs to help with this part.

Rob
-- 
Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w  *
*
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-27 Thread Jeff Gilbert
A 64-bit build would, but we don't ship those.
A win64 machine running a 32-bit program can have 4GB of VM, though.

-Jeff

- Original Message -
From: Robert Kaiser ka...@kairo.at
To: dev-platform@lists.mozilla.org
Sent: Wednesday, November 27, 2013 4:35:36 PM
Subject: Re: Reacting more strongly to low-memory situations in Firefox 25

Benjamin Smedberg schrieb:
 On 11/25/13 8:15 PM, Bas Schouten wrote:
 I'm a little confused, when I force OOM my firefox build on 64-bit
 windows it -definitely- goes down before it reaches more than 3GB of
 working set. Am I missing something here?
 I think so. I did not mention working set at all. I am merely
 calculating whether users are running win64 or win32, based on whether
 they have 4G of total VM size (win64) or 2G/3G (win32).

Wouldn't a Win64 Nightly get more than 4G VM size?

KaiRo

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Jeff Gilbert
Erase-remove seems like poor naming, more than anything. I think that iterators 
(pretending things are arrays) are generally really handy. The alternatives 
(doing everything through function pointers) tend towards the arcane and are 
often awkward to use.

In my experience, docs for 'our' containers are also miles behind what the STL 
has available.

We don't *always* need to use the maximally efficient container type. Let 
profiling and knowledge of use-cases show us when we need fast containers. Most 
of the time, I just want simple containers (particularly for DEBUG-only code).

-Jeff

- Original Message -
From: Robert O'Callahan rob...@ocallahan.org
To: Benoit Jacob jacob.benoi...@gmail.com
Cc: Chris Pearce cpea...@mozilla.com, dev-platform 
dev-platform@lists.mozilla.org
Sent: Tuesday, December 10, 2013 2:04:06 PM
Subject: Re: Can we start using C++ STL containers in Mozilla code?

On Wed, Dec 11, 2013 at 10:50 AM, Benoit Jacob jacob.benoi...@gmail.com wrote:

 2013/12/10 Robert O'Callahan rob...@ocallahan.org

 Keep in mind that proliferation of different types for the same
 functionality hurts developer productivity in various ways, especially
 when
 they have quite different APIs. That's the main reason I'm not excited
 about widespread usage of a lot of new (to us) container types.


 For the same reason as described above, I believe that adopting STL
 containers is the solution, not the problem! The STL shows how to design
 containers that have a sufficiently similar API that, in most cases where
 that makes sense (e.g. between a map and an unordered_map), you can switch
 containers without having to adapt to a different API.


I'm not a fan of the STL's approach of making every data structure feel
like an array. It leads to nonsense like the erase-remove idiom. A remove
function that doesn't actually remove anything, now that's a footgun :-).

Rob
-- 
Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On the usefulness of style guides (Was: style guide proposal)

2013-12-19 Thread Jeff Gilbert
It's a compromise. I think we can take it as given that we'll never all agree on 
one way to do things, so instead, different modules do their own things. I 
think this is a pretty decent solution, though the problem becomes needing 
different guides for different modules. (The "just mimic what you see" approach 
is... imperfect.)

I posit that there's probably not much benefit to having, say, style for dom/ 
match style for gfx/, since precious few people deal with both at once.

-Jeff

- Original Message -
From: Ehsan Akhgari ehsan.akhg...@gmail.com
To: Till Schneidereit t...@tillschneidereit.net, Martin Thomson 
m...@mozilla.com
Cc: Gregory Szorc g...@mozilla.com, Nicholas Nethercote 
n.netherc...@gmail.com, Bobby Holley bobbyhol...@gmail.com, Andrea 
Marchesini amarches...@mozilla.com, Mike Hommey m...@glandium.org, 
dev-platform@lists.mozilla.org
Sent: Thursday, December 19, 2013 2:11:52 PM
Subject: Re: On the usefulness of style guides (Was: style guide proposal)

On 12/19/2013, 12:57 PM, Till Schneidereit wrote:
 I think we should do more than encourage: we should back out for all
 style guide violations. Period. We could even enforce that during upload
 to a review tool, perhaps.

 However. This has to be done on a per-module basis (or even more
 fine-grained: different parts of, e.g., SpiderMonkey have slightly
 different styles). Different modules have vastly different styles,
 ranging from where to put braces over how much to indent to how to name
 fields/ vars/ arguments. I very, very much doubt we'll ever be able to
 reconcile these differences. (Partly because some of the affected people
 from different modules sit in the same offices, and would probably get
 into fist fights.)

See, that right there is the root problem!  Programmers tend to care too 
much about their favorite styles.  I used to be like that but over the 
years I've mostly stopped caring about which style is better, and what I 
want now is consistency, even if the code looks ugly to *me*.  The 
projects which enforce a unified style guideline have this huge benefit 
that their code looks clean and consistent, as if it was all written by 
the same person.  But letting each module enforce its own rules leads to 
the kind of code base which we have now where you get a completely 
different style depending on which directory and file you're looking at. 
  I think trying to enforce this kind of inconsistency with tools is 
strictly worse than what we have today.

If we stepped back for a second and agreed to put aside our personal 
preferences and value consistency more, we could use tools to enforce 
the style guidelines and we'd end up where other code bases that have 
done this are right now.

That all being said, I'm sure there are people who don't value 
consistency as much as I do and are more interested to have the code 
they work on the most look beautiful to them.  I think these two camps 
are at odds with each other and so far the latter camp has won this 
battle.  :/

Cheers,
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On the usefulness of style guides (Was: style guide proposal)

2013-12-19 Thread Jeff Gilbert
How do patches get fully reviewed if the combined knowledge of the reviewers 
doesn't include whether the style's right? Alternatively, who is signing off on 
the content of code, with in-depth knowledge of how it works, but without 
knowing the style? That sounds like a different problem, really. (I've also 
definitely seen this, FWIW. A number of times, I've seen mis-styled code that 
should have failed review for other reasons. Not that I'm saying this is a 
use-case of differing styles! Rather, mis-styling correlates with poor review 
quality.)

That aside, what you describe sounds like 'too many' styles, almost 
definitionally, if it's such a major pain. However, there are quantities 
between 'one' and 'many', so I don't think that because 'too many' is clearly 
bad, we have to choose 'one'. It's easy to say we should all use the same 
style before we agree what that ideal style is.

Personally, there are a couple of things I don't like about moz-style (though 
revisions to the central style guide at least have made it better than it used 
to be), but instead of bikeshedding the central style guide, we just do our own 
thing in the code we're responsible for.

If we're going to standardize a certain style for the whole tree, then we're 
going to need to re-discuss a couple of the rules. It's a decent amount of work 
to restyle the modules well, and I'm hardly eager to do work to make this code 
*less* readable for the only people who ever look at it. I'm personally fine 
with bikeshedding over this, but I'm not going to jump on the 'standardize' 
bandwagon if we're just going to be told not to bikeshed later when 
counter-proposals are brought up. There needs to be compromise on some front.

Besides, if style is important enough to standardize, surely it must be 
important enough to fully address. 

The last bit of the problem is that *all of Gecko* is currently 'files with 
existing styles', so I'm not sure how that can mesh with having 'one true 
style' unless we have an initiative to actually convert over all mis-styled 
files.

If we either fail to convert large parts of Gecko, or fail to address concerns 
people have, we're just going to end up with the same style fiefdoms we already 
have.

I suppose my counter-question is 'How does standardizing styles across modules 
help us?' In my experience, reviewing (or being reviewed) for style takes 
almost no time. It's definitely dwarfed by the time I need to spend thinking 
about the changes being made to the code. I think readability improvements (for 
those who actually work on the code in question) are more important than the 
code looking exactly the same as code from a module I've never even opened.

-Jeff

PS: Don't get me wrong: Moz-style has gotten way better recently. I agree with 
almost all of it, but there are a couple of points with which I really don't.

- Original Message -
From: Bobby Holley bobbyhol...@gmail.com
To: Jeff Gilbert jgilb...@mozilla.com
Cc: Ehsan Akhgari ehsan.akhg...@gmail.com, Till Schneidereit 
t...@tillschneidereit.net, Martin Thomson m...@mozilla.com, Gregory 
Szorc g...@mozilla.com, Nicholas Nethercote n.netherc...@gmail.com, 
Andrea Marchesini amarches...@mozilla.com, Mike Hommey 
m...@glandium.org, dev-platform@lists.mozilla.org
Sent: Thursday, December 19, 2013 2:56:28 PM
Subject: Re: On the usefulness of style guides (Was: style guide proposal)

On Thu, Dec 19, 2013 at 2:29 PM, Jeff Gilbert jgilb...@mozilla.com wrote:

 I posit that there's probably not much benefit to having, say, style for
 dom/ match style for gfx/, since precious-few people often deal with both
 at once.


I don't agree at all.

I own XPConnect, which is one of the most stylistically-difficult modules,
because it sits at the intersection of the JS engine and the rest of Gecko,
which have two very-different style guides. XPConnect generally uses JS
style, but there are various Geckoisms that creep in as well. Trying to
maintain consistency is a nightmare, especially because there are often
cross-cutting patches from JS hackers and Gecko hackers that don't always
get review from an XPConnect peer.

I myself very often touch code all over the engine in a single patch, and
it's impractical to get reviews from all the different owners. Differences
between SpiderMonkey style and Gecko style already make this difficult. So
I don't think we should try to proliferate this pattern.

The JS team has done a great job of documenting very clear style rules in
their style guide. I'm pushing to do more of that for Gecko, so that I have
clear rules to follow and enforce regardless of whether I'm in content/xbl,
dom/base, or image/. Spending time quibbling about these things for every
patch is a huge waste of everyone's time.

If the rest of a given file is clearly written in a different style, we
should try to maintain consistency. Absent that though, we should strive to
have one uniform style guide for Gecko, and make sure that it answers all
of our

Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Jeff Gilbert
FWIW, WebGL and GLContext do this for the same reasons.

-jgilbert

- Original Message -
From: Jeff Walden jwalden+...@mit.edu
To: dev-platform@lists.mozilla.org
Sent: Tuesday, January 7, 2014 2:26:46 PM
Subject: Re: Mozilla style guide issues, from a JS point of view

On 01/07/2014 02:23 PM, Boris Zbarsky wrote:
 One reason I've seen 2 preferred to 4 (apart from keeping line lengths down) 
 is that:
 
   if (somethingHere() &&
       somethingElse())
       doSomething();
 
 is less clear about what's the condition and what's the if body than:
 
   if (somethingHere() &&
       somethingElse())
     doSomething();
 
 but this would obviously also be affected by the bracing policy.

Regarding the effect of the bracing policy, SpiderMonkey used to have

if (somethingHere() &&
    somethingElse()) {
    doSomething();
}

which was unreadable.  You simply can't easily skim and see where the body 
starts and where the condition ends, even with braces.  We shoved the opening 
brace to its own line:

if (somethingHere() &&
    somethingElse())
{
    doSomething();
}

It doesn't seem to me that the four-space issue here is helped by braces in 
end-of-line configuration.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please take ask.mozilla.org for a spin

2014-03-16 Thread Jeff Gilbert
Seconded. I want to sign into Mozilla things via my Mozilla accounts. Any 
barrier like this in the way of otherwise-busy engineers is going to hamper 
adoption, and adoption will only solidify if enough people are using it and/or 
there's enough content. If you want Mozillians to use it, you should make it 
easy for Mozillians to use! :)

For what it's worth, I think it's a great idea, and could be a serious resource 
even just later this year. There's a lot of content that needs to be generated 
to get to that point, though.

-Jeff

- Original Message -
From: Mike Hommey m...@glandium.org
To: Taras Glek tg...@mozilla.com
Cc: Nicholas Nethercote n.netherc...@gmail.com, 
dev-platform@lists.mozilla.org
Sent: Friday, March 14, 2014 4:16:46 PM
Subject: Re: Please take ask.mozilla.org for a spin

On Fri, Mar 14, 2014 at 02:33:08PM -0700, Taras Glek wrote:
 Hi,
 I asked 
 https://ask.mozilla.org/question/237/why-do-you-not-use-askmozillaorg/.
 Feedback would be appreciated.

Because it requires yet another account.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Jeff Gilbert
My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.

It's really handy to have something like MOZ_ASSERT_UNREACHABLE, instead of 
having a bunch of MOZ_ASSERT(false, "Unreachable.") lines.

Consider MOZ_ASSERT_UNREACHABLE being the same as MOZ_OPTIMIZE_FOR_UNREACHABLE 
in non-DEBUG builds.
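
A sketch of that proposal (illustrative only, not the actual mfbt
definitions):

#ifdef DEBUG
#  define MOZ_ASSERT_UNREACHABLE() \
     MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE")
#else
   // Let the optimizer assume this point is never reached.
#  define MOZ_ASSERT_UNREACHABLE() MOZ_OPTIMIZE_FOR_UNREACHABLE()
#endif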

-Jeff

- Original Message -
From: Steve Fink sf...@mozilla.com
To: Benoit Jacob jacob.benoi...@gmail.com
Cc: Chris Peterson cpeter...@mozilla.com, dev-platform 
dev-platform@lists.mozilla.org
Sent: Friday, March 28, 2014 1:20:39 PM
Subject: Re: MOZ_ASSUME_UNREACHABLE is being misused

On Fri 28 Mar 2014 01:05:34 PM PDT, Benoit Jacob wrote:
 2014-03-28 13:23 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

 On 3/28/14, 12:25 PM, Benoit Jacob wrote:

 Should we give typical code a macro that does what they want and sounds
 like what they want? Really, what typical code wants is a no-operation
 instead of undefined-behavior; now, that is exactly the same as
 MOZ_ASSERT(false, "error"). Maybe this syntax is unnecessarily annoying,
 and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH
 but only affecting DEBUG builds? What would be a good name for it? Is it
 worth keeping a close analogy with the unreachable-marker macro to steer
 people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even
 just MOZ_UNREACHABLE? So that people couldn't miss it when they look for
 UNREACHABLE macros?


  How about replacing MOZ_ASSUME_UNREACHABLE with two new macros like:

  #define MOZ_ASSERT_UNREACHABLE() \
    MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE")

  #define MOZ_CRASH_UNREACHABLE() \
    do { \
      MOZ_ASSUME_UNREACHABLE_MARKER(); \
      MOZ_CRASH("MOZ_CRASH_UNREACHABLE"); \
    } while (0)


 MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily
 miscompile this, and anything from that point on in this branch, as you may
 assume that this code is unreachable." So it doesn't really serve any
 purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

MOZ_OPTIMIZE_FOR_UNREACHABLE?

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Jeff Gilbert
- Original Message -
 From: Rik Cabanier caban...@gmail.com
 To: Benoit Jacob jacob.benoi...@gmail.com
 Cc: Botond Ballo bba...@mozilla.com, dev-platform 
 dev-platform@lists.mozilla.org, Jet Villegas
 j...@mozilla.com
 Sent: Monday, June 9, 2014 2:08:37 PM
 Subject: Re: C++ standards proposals of potential interest, and upcoming  
 committee meeting
 
 On Mon, Jun 9, 2014 at 1:50 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:
 
  2014-06-09 16:27 GMT-04:00 Jet Villegas j...@mozilla.com:
 
   It seems healthy for the core C++ language to explore new territory here.
   Modern primitives for things like pixels and colors would be a good
  thing,
   I think. Let the compiler vendors compete to boil it down to the CPU/GPU.
 
 
  In the Web world, we have such an API, Canvas 2D, and the compiler
  vendors are the browser vendors. After years of intense competition
  between browser vendors, and very high cost to all browser vendors, nobody
  has figured yet how to make Canvas2D efficiently utilize GPUs.
 
 
 Chrome, IE and Safari all have GPU accelerated backends with good success.
 Deferred rendering is working very well.
We also use Skia's GL backend on some platforms, and D2D on Windows. They're 
strictly slower than reimplementing the app in WebGL.
 
 
  There are
  basically two kinds of Canvas2D applications: those for which GPUs have
  been useless so far, and those which have benefited much more from getting
  ported to WebGL, than they did from accelerated Canvas 2D.
 
 
 That is not true. For instance, do you think mozilla's shumway would be
 better and reliable if it was written in WebGL?
It depends on the primitives shumway receives. If Flash uses a 2d-like API like 
skia/cairo/etc., then it is probably not worth the effort to reimplement the 
awful-for-GL parts of those APIs.
If there's some performant subset that can be identified, then yes, doing a 
WebGL path for that would have a higher performance ceiling.

 
 
   There will always be the argument for keeping such things out of Systems
   languages, but that school of thought won't use those features anyway. I
   was taught to not divide by 2 because bit-shifting is how you do fast
   graphics in C/C++. I sure hope the compilers have caught up and such
   trickery is no longer required--Graphics shouldn't be such a black art.
  
   --Jet
  
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-10 Thread Jeff Gilbert

- Original Message -
 On Mon, Jun 9, 2014 at 9:45 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:
  I would like the C++ committee's attention to be drawn to the dangers, for
  committee, to try to make decisions outside of its domain of expertise. I
  see more potential for harm than for good in having the C++ committee join
  the ranks of non graphics specialists thinking they know how to do
  graphics...
 
  If that helps, we can give a pile of evidence for how having generalist
 Web
  circles trying to standardize graphics APIs has repeatedly given
  unnecessarily poor APIs...
 
 It sounds to me like you're opposed to graphics APIs that try to cater to
 non-graphics specialists, period. Do you think that it's outright
 impossible to design a good API with this intent? If you don't, then I'm
 sure it's possible to give more constructive feedback than this, frankly,
 insulting non-specific criticism of essentially the entire idea of
 standardizing anything graphics-related.
Why put this into core C++? Why not leave it to libraries?

What's the core problem trying to be solved here? It's either:
A) We want a 2D graphics API in C++.
or
B) We want to make it easy to do 2D graphics in C++.

The reason it feels like B implies A is that using libraries is a terrible pain.
Fix the cause of the friction for using existing graphics APIs, instead of just
treating the symptoms.

 
 
 On Mon, Jun 9, 2014 at 11:38 PM, Jeff Gilbert jgilb...@mozilla.com wrote:
 
  We also use Skia's GL backend on some platforms, and D2D on windows.
  They're strictly slower than reimplementing the app in WebGL.
 
 
 Which kind of app are you talking about here? All The Apps? Including every
 single game ever made with Canvas2D?
The short answer is 'yes, especially the games'. It has been an uphill battle
to raise awareness of this in games, but the reality is that if you want
performant graphics, don't use Canvas2D. Canvas2D should in fact be 'considered
harmful' to performance, for most games.

GL-based APIs have orders of magnitude better performance in a number of areas,
at the expense of having no API to do things CPUs are good at, like fonts,
paths, etc. How to handle these with WebGL? Rasterize them with Canvas2D and
upload them to WebGL once, reusing them hundreds of times. This is what SkiaGL
tries to do anyways.
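
In GL terms, the pattern is roughly the following sketch, where width, height,
and pixels are assumed to hold a CPU-rasterized RGBA glyph/path atlas:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Pay the CPU rasterization + upload cost once...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...then reuse the texture for hundreds of cheap GPU-side draws.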

There is overhead to using a non-purpose-designed API. Our accelerated
Canvas2D backends try really hard, but they will not be faster than someone
doing the low-level work themselves. Games (which are largely blitting-bound)
will just be faster if you can tell the hardware more explicitly what you want.

Note that this is an instance of the 'sufficiently smart compiler/runtime'
problem, and that for C++, I can't imagine there will be a SkiaGL-like runtime
to translate slow 2D graphics commands into faster GL/D3D dispatches to
drivers. We're then left with a purely slow, beginner-targeted 2D graphics API.

I'll note that in OpenGL, we used to have beginner-friendly APIs which allowed
for drawing things vertex-by-vertex and manipulating a primitive shading
pipeline. But they were slow, and considered harmful; eventually, we managed to
get rid of them. OpenGL is moving towards only offering a powerful base API. If
you want an easier API, use a library.
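
(The removed beginner API being classic immediate mode; for example, this no
longer exists in core GL:)

// One driver call per vertex: friendly, but slow, and now gone from
// core OpenGL in favor of buffer-based submission (VBOs + shaders).
glBegin(GL_TRIANGLES);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();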

 
 
   That is not true. For instance, do you think mozilla's shumway would be
   better and reliable if it was written in WebGL?
  It depends on the primitives shumway receives. If Flash uses a 2d-like API
  like skia/cairo/etc., then it is probably not worth the effort to
  reimplement the awful-for-GL parts of those APIs.
  If there's some performant subset that can be identified, then yes, doing
  a WebGL path for that would have a higher performance ceiling.
 
 
 We (the Shumway team) looked into implementing all the vector stuff in
 WebGL. We might end up using exactly this approach, but for the general
 case we hope we can continue relying on Canvas2D. (hope, because Flash
 does some things that are hard to impossible to represent in Canvas2D.
 They're not important for this discussion, though: the ability to
 faithfully implement a legacy system isn't a good criterion for the quality
 of, well, anything.)

Ok, but be aware that you are almost certainly leaving some performance on the 
table. (This is probably fine for Shumway, though. Ideally people aren't 
authoring performance-sensitive Flash applications anymore.)

 
There will always be the argument for keeping such things out of
  Systems
 languages, but that school of thought won't use those features
  anyway. I
 was taught to not divide by 2 because bit-shifting is how you do fast
 graphics in C/C++. I sure hope the compilers have caught up and such
 trickery is no longer required--Graphics shouldn't be such a black
  art.
 
 
  I agree with this. The particular approach the C++ committee chose might
 be a bad one, and I do think that domain experts should do the majority

Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-10 Thread Jeff Gilbert

- Original Message -
 On Mon, Jun 9, 2014 at 2:38 PM, Jeff Gilbert jgilb...@mozilla.com wrote:
 
  - Original Message -
   From: Rik Cabanier caban...@gmail.com
   To: Benoit Jacob jacob.benoi...@gmail.com
   Cc: Botond Ballo bba...@mozilla.com, dev-platform 
  dev-platform@lists.mozilla.org, Jet Villegas
   j...@mozilla.com
   Sent: Monday, June 9, 2014 2:08:37 PM
   Subject: Re: C++ standards proposals of potential interest, and upcoming
   committee meeting
  
   On Mon, Jun 9, 2014 at 1:50 PM, Benoit Jacob jacob.benoi...@gmail.com
   wrote:
  
2014-06-09 16:27 GMT-04:00 Jet Villegas j...@mozilla.com:
   
 It seems healthy for the core C++ language to explore new territory
  here.
 Modern primitives for things like pixels and colors would be a good
thing,
 I think. Let the compiler vendors compete to boil it down to the
  CPU/GPU.
   
   
In the Web world, we have such an API, Canvas 2D, and the compiler
vendors are the browser vendors. After years of intense competition
between browser vendors, and very high cost to all browser vendors,
  nobody
has figured yet how to make Canvas2D efficiently utilize GPUs.
  
  
   Chrome, IE and Safari all have GPU accelerated backends with good
  success.
   Deferred rendering is working very well.
  We also use Skia's GL backend on some platforms, and D2D on windows.
  They're strictly slower than reimplementing the app in WebGL.
 
 
 Sure, if you're just animating bitmaps with optional filters/compositing,
 WebGL is faster. Drawing paths and text on the GPU requires complex shaders
 or tessellation, and that requires a lot of finessing.

If it's not a good candidate for GPU acceleration, just do it on the CPU. 
Games and other performance-sensitive apps generally benefit from at least
hardware-accelerated blitting and blending.

Even in the browser, which is full of text, paths, gradients, and more, we find
it advantageous to eventually composite with accelerated 3D APIs (given the
need to scroll and zoom), even if we rasterize many of the above on the CPU.

 
 
There are
basically two kinds of Canvas2D applications: those for which GPUs have
been useless so far, and those which have benefited much more from
  getting
ported to WebGL, than they did from accelerated Canvas 2D.
  
  
   That is not true. For instance, do you think mozilla's shumway would be
   better and reliable if it was written in WebGL?
  It depends on the primitives shumway receives. If Flash uses a 2d-like API
  like skia/cairo/etc., then it is probably not worth the effort to
  reimplement the awful-for-GL parts of those APIs.
  If there's some performant subset that can be identified, then yes, doing
  a WebGL path for that would have a higher performance ceiling.
 
 
 Likely a hybrid approach where vectors are rendered by canvas 2D into a
 bitmap and those bitmaps are then animated by WebGL.
 A bitmap would be generated when a movieclip has:
 - a filter
 - a blend mode
 - cache as bitmap

Indeed, for WebGL apps, the general idea for text (for example) is to rasterize
to a Canvas2D canvas, and then use that canvas as a texture in WebGL. (write 
once,
draw often) FWIW, this is basically the web-API version of what we do in Gecko 
for
page rasterization.


I should address the idea that we could use cpp2d to do non-GPU rasterization:
I guess, but why? It gives us a default, but is that needed? In Gecko, we
certainly would make very sparing use of it, if at all. (We already have Cairo,
after all.)

If C++ wants a default, just package Cairo (or something else) as a library 
alongside
Clang/LLVM or GCC. 

-Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-19 Thread Jeff Gilbert
I would actually say that debug tests are more important for continuous 
integration than opt tests. At least in code I deal with, we have a ton of 
asserts to guarantee behavior, and we really want test coverage with these via 
CI. If a test passes on debug, it should almost certainly pass on opt, just 
faster. The opposite is not true.
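
Concretely, a minimal illustration (the function and checks are made up; the
mechanism is just that MOZ_ASSERT compiles away outside DEBUG builds):

// MOZ_ASSERT fires only in debug builds; opt builds compile it out,
// so only a debug test run exercises behavioral checks like these:
void Frobnicate(Thing* aThing) {
  MOZ_ASSERT(aThing, "Callers must null-check first");
  MOZ_ASSERT(aThing->IsInValidState());
  // ... opt builds run only what follows ...
}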

"They take a long time and then break" is part of what I believe caused us to 
not bother with debug testing on much of Android and B2G, which we still 
haven't completely fixed. It should be unacceptable to ship without CI on debug 
tests, but here we are anyway. (This is finally nearly fixed, though there is 
still some work to do.)

I'm not saying running debug tests less often is on the same scale of bad, but 
I would like to express my concerns about heading in that direction.

-Jeff

- Original Message -
From: Jonathan Griffin jgrif...@mozilla.com
To: dev-platform@lists.mozilla.org
Sent: Tuesday, August 19, 2014 12:22:21 PM
Subject: Experiment with running debug tests less often on mozilla-inbound  
the week of August 25

Our pools of test slaves are often at or over capacity, and this has the 
effect of increasing job coalescing and test wait times.  This, in turn, 
can lead to longer tree closures caused by test bustage, and can cause 
try runs to be very slow to complete.

One of the easiest ways to mitigate this is to run tests less often.

To assess the impact of doing this, we will be performing an experiment 
the week of August 25, in which we will run debug tests on 
mozilla-inbound on most desktop platforms every other run, instead of 
every run as we do now.  Debug tests on linux64 will continue to run 
every time.  Non-desktop platforms and trees other than mozilla-inbound 
will not be affected.

This approach is based on the premise that the number of debug-only 
platform-specific failures on desktop is low enough to be manageable, 
and that the extra burden this imposes on the sheriffs will be small 
enough compared to the improvement in test slave metrics to justify the 
cost.

While this experiment is in progress, we will be monitoring job 
coalescing and test wait times, as well as impacts on sheriffs and 
developers.  If the experiment causes sheriffs to be unable to perform 
their job effectively, it can be terminated prematurely.

We intend to use the data we collect during the experiment to inform 
decisions about additional tooling we need to make this or a similar 
plan permanent at some point in the future, as well as validating the 
premise on which this experiment is based.

After the conclusion of this experiment, a follow-up post will be made 
which will discuss our findings.  If you have any concerns, feel free to 
reach out to me.

Jonathan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-20 Thread Jeff Gilbert
I was just going to ask about this. I glanced through the mozconfigs in the 
tree for at least Linux debug, and it looks like they only have --enable-debug, 
not even -O1. Maybe it's buried somewhere in there, but I didn't find it with a 
quick look.

I took a look at the build log for WinXP debug, and --enable-opt is only 
present on the configure line for nspr, whereas --enable-debug is in a number 
of other places.

Can we get confirmation on whether debug builds are (partially?) optimized? If 
not, we should do that. (Unless I'm missing a reason not to, especially if we 
only care about pass/fail, and not crash stacks/debuggability.)
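
For comparison, the local mozconfig knobs in question look like this (a
sketch; whether the automation configs actually set the second line is exactly
what needs confirming):

# Debug build that still gets light optimization:
ac_add_options --enable-debug
ac_add_options --enable-optimize="-O1"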

-Jeff

- Original Message -
From: Kyle Huey m...@kylehuey.com
To: Joshua Cranmer  pidgeo...@gmail.com
Cc: dev-platform dev-platform@lists.mozilla.org
Sent: Tuesday, August 19, 2014 3:56:27 PM
Subject: Re: Experiment with running debug tests less often on mozilla-inbound  
the week of August 25

I'm pretty sure the debug builds on our CI infrastructure are built
with optimization.

- Kyle

On Tue, Aug 19, 2014 at 3:42 PM, Joshua Cranmer  pidgeo...@gmail.com wrote:
 On 8/19/2014 5:25 PM, Ehsan Akhgari wrote:

 Yep, the debug tests indeed take more time, mostly because they run more
 checks.


 Actually, the bigger cause in the slowdown is probably that debug tests
 don't have any optimizations, not more checks. An atomic increment on a
 debug build invokes something like a hundred instructions (including several
 call instructions) whereas the equivalent operation on an opt build is just
 one.

 --
 Joshua Cranmer
 Thunderbird and DXR developer
 Source code archæologist


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-20 Thread Jeff Gilbert
Graphics in particular is plagued by non-cross-platform code. Debug coverage on 
Linux gives us no practical coverage for our Windows, Mac, Android, or B2G 
code. Maybe this is better solved by reviving the Graphics branch, however.

-Jeff

- Original Message -
From: Chris AtLee cat...@mozilla.com
To: Ehsan Akhgari ehsan.akhg...@gmail.com
Cc: Jonathan Griffin jgrif...@mozilla.com, Jeff Gilbert 
jgilb...@mozilla.com, dev-platform@lists.mozilla.org
Sent: Wednesday, August 20, 2014 9:02:14 AM
Subject: Re: Experiment with running debug tests less often on mozilla-inbound 
the week of August 25

On 18:25, Tue, 19 Aug, Ehsan Akhgari wrote:
On 2014-08-19, 5:49 PM, Jonathan Griffin wrote:
On 8/19/2014 2:41 PM, Ehsan Akhgari wrote:
On 2014-08-19, 3:57 PM, Jeff Gilbert wrote:
I would actually say that debug tests are more important for
continuous integration than opt tests. At least in code I deal with,
we have a ton of asserts to guarantee behavior, and we really want
test coverage with these via CI. If a test passes on debug, it should
almost certainly pass on opt, just faster. The opposite is not true.

They take a long time and then break is part of what I believe
caused us to not bother with debug testing on much of Android and
B2G, which we still haven't completely fixed. It should be
unacceptable to ship without CI on debug tests, but here we are
anyways. (This is finally nearly fixed, though there is still some
work to do)

I'm not saying running debug tests less often is on the same scale of
bad, but I would like to express my concerns about heading in that
direction.

I second this.  I'm curious to know why you picked debug tests for
this experiment.  Would it not make more sense to run opt tests on
desktop platforms on every other run?

Just based on the fact that they take longer and thus running them less
frequently would have a larger impact.  If there's a broad consensus
that debug runs are more valuable, we could switch to running opt tests
less frequently instead.

Yep, the debug tests indeed take more time, mostly because they run 
more checks.  :-)  The checks in opt builds are not exactly a subset 
of the ones in debug builds, but they are close.  Based on that, I 
think running opt tests on every other push is the more conservative 
option, and I support it more.  That being said, for this one-week 
limited trial, given that the sheriffs will help backfill the skipped 
tests, I don't care very strongly about this, as long as it doesn't 
set the precedent that we can ignore debug tests!

I'd like to highlight that we're still planning on running debug 
linux64 tests for every build. This is based on the assumption that 
debug-specific failures are generally cross-platform failures as well.

Does this help alleviate some concern? Or is that assumption just plain 
wrong?

Cheers,
Chris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-20 Thread Jeff Gilbert
If running debug tests on a single platform is generally sufficient for 
non-graphics bugs, it might be useful to have the Graphics branch run debug 
tests on all platforms, for use with graphics checkins. (while running a 
decreased number of debug tests on the main branches) It's still possible for 
non-graphics code to expose platform-specific bugs, but it's less likely, so 
maybe larger regression windows are acceptable for platform-specific bugs in 
non-graphics code.

-Jeff

- Original Message -
From: Ehsan Akhgari ehsan.akhg...@gmail.com
To: Jeff Gilbert jgilb...@mozilla.com, Chris AtLee cat...@mozilla.com
Cc: Jonathan Griffin jgrif...@mozilla.com, dev-platform@lists.mozilla.org
Sent: Wednesday, August 20, 2014 3:16:31 PM
Subject: Re: Experiment with running debug tests less often on mozilla-inbound 
the week of August 25

On 2014-08-20, 5:46 PM, Jeff Gilbert wrote:
 Graphics in particular is plagued by non-cross-platform code. Debug coverage 
 on Linux gives us no practical coverage for our windows, mac, android, or b2g 
 code. Maybe this is better solved with reviving the Graphics branch, however.

Having more branches doesn't necessarily help with consuming less infra 
resources, unless the builds will be run with a lower frequency or 
something.

Cheers,
Ehsan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mochitests from a copy of the objdir?

2014-08-20 Thread Jeff Gilbert
 From: Gregory Szorc g...@mozilla.com
 To: James Graham ja...@hoppipolla.co.uk, dev-platform@lists.mozilla.org
 Sent: Wednesday, August 20, 2014 11:24:00 AM
 Subject: Re: Running mochitests from a copy of the objdir?
 
 It sounds like the "copy build to remote machine so we can run tests
 there" workflow is somewhat common and needs to be better supported by the tools.

+1
...+10

 
 I think it makes sense to leverage test archives and build packages for
 this. Since mozharness already supports running things from there, I
 guess the missing piece is some glue code that invokes mozharness with a
 sane set of default options. That missing piece sounds a lot like what
 mach does. We talked about bundling mach in the tests archive to
 facilitate things like this. Perhaps we should move forward with that?
 Or is there a better way?
 
 The problem is at the unholy intersection of build, test, and automation
 responsibilities. Who's the owner? A-Team?
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-20 Thread Jeff Gilbert
 From: Ehsan Akhgari ehsan.akhg...@gmail.com
 To: Jeff Gilbert jgilb...@mozilla.com
 Cc: Chris AtLee cat...@mozilla.com, Jonathan Griffin 
 jgrif...@mozilla.com, dev-platform@lists.mozilla.org
 Sent: Wednesday, August 20, 2014 4:00:15 PM
 Subject: Re: Experiment with running debug tests less often on 
 mozilla-inbound the week of August 25
 
 On 2014-08-20, 6:29 PM, Jeff Gilbert wrote:
  If running debug tests on a single platform is generally sufficient for
  non-graphics bugs,
 
 It is not.  That is the point I was trying to make.  :-)
 
   it might be useful to have the Graphics branch run debug tests on all
 platforms, for use with graphics checkins. (while running a decreased
 number of debug tests on the main branches) It's still possible for
 non-graphics code to expose platform-specific bugs, but it's less
 likely, so maybe larger regression windows are acceptable for
 platform-specific bugs in non-graphics code.
 
 I don't really understand how graphics is special here.  We do have
 platform specific code outside of graphics as well, so we don't need to
 solve this problem for gfx specifically.
 

Maybe Graphics isn't that special, but this stuff hits really close to home for 
us.

I have been asked in the past whether we really need to run WebGL tests on Android, 
given that they have coverage on Desktop platforms.
And then again later, why B2G if we have Android.

There seems to be enough belief in test-once-run-everywhere that I feel the 
need to *firmly* establish that this is not acceptable, at least for the code I 
work with.
I'm happy I'm not alone in this.

-Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to Visual Studio 2013

2014-08-27 Thread Jeff Gilbert
 From: Gian-Carlo Pascutto gpascu...@mozilla.com
 To: dev-platform@lists.mozilla.org
 Sent: Wednesday, August 27, 2014 11:23:34 AM
 Subject: Re: Switching to Visual Studio 2013
 
 On 27/08/2014 17:30, Ted Mielczarek wrote:
 
   I think our commitment to the Express versions should be to ensure it
  works with whatever Microsoft makes it easy to get a hold of. We should
  fix bugs that impact developers' ability to build with the versions of
  the toolchain we support, but I haven't seen that be a huge problem in
  practice.
  (I'm not sure why you particularly care about SP2 vs. SP3
  here, we should just make sure we build with the latest version.)
 
  There are compelling advantages to compiling with the exact same
 toolchain as on the buildbots: you won't end up writing code that the
 buildbot barfs on because it's on a newer/older toolchain, and if you
 have a failure on try that might be compiler related, the odds are
 bigger that you will be able to reproduce the exact issue locally if the
 compiler matches to begin with.
 
 These issues aren't common, so indeed they're not a huge problem in
 practice, but they're still nice-to-haves.

I don't find this particularly compelling, since invariably the buildbots that 
barf on me are B2G or OSX.
I can't remember a time when my Windows build worked on Try but not on my 
Windows box, or vice-versa.
It should be possible to acquire the same toolchain, but it's not that 
important for normal development.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: JavaScript (strict) Warning from FF mochitest-plain test.

2014-10-02 Thread Jeff Gilbert
We should most likely assert when we have strict violations in chrome JS.
We should *definitely* assert when we have JS errors in chrome JS.
I don't see how there should be any excuse for either of these cases.

We certainly can't assert on non-browser JS errors, but we should keep our own 
scripts clean.

- Original Message -
From: chiaki ISHIKAWA ishik...@yk.rim.or.jp
To: dev-platform@lists.mozilla.org
Sent: Thursday, October 2, 2014 11:59:04 AM
Subject: JavaScript (strict) Warning from FF mochitest-plain test.

Hi,

I was looking at a large number of JavaScript (strict) warnings and
errors from the recording of a |make mozmill| test run of a locally-built
DEBUG BUILD of TB over the last several days, after a refresh of the C-C source tree.

The number of such has increased very much, both
 - due to the seemingly stricter checks done by the JS infrastructure, and
 - due to new, carelessly written code,
I think.

But the sheer number of them overwhelmed me, and I began wondering: is FF
faring better?

I don't think so.

Here are the list of such errors summarized from the
log of |./mach mochitest-plain| of locally-built DEBUG BUILD of FF (from
M-C portion of C-C source tree).

The # at the beginning of each line is the frequency of the occurrences.
[Checking the health of the logs from FF tests came to my mind while I
was looking at the "setting getter-only property Services" message, which
is printed more than 3K times during TB's |make mozmill| test run.]

I think we need a sort of sheriff to look at
these warnings and file bugzilla entries accordingly from time to time:
I say this because it seems not many people look at debug-build runs of
the test suite often (well, at least on the TB side. I don't know if a TB
debug run is created regularly.)

Excerpt from summary of the log : ./mach mochitest-plain

 
 JavaScript strict warning
 jquery.js and jquery-ui.js are ignored.
 

206 JavaScript error:
file:///home/ishikawa/ff-objdir-tb3/dist/bin/components/nsHandlerService.js,
line 891: NS_ERROR_FAILURE: Component returned failure code: 0x80004005
(NS_ERROR_FAILURE) [nsIProperties.get]
 99 JavaScript strict warning: , line 0: TypeError: setting
getter-only property PrivateBrowsingUtils
 66 JavaScript strict warning: , line 0: TypeError: setting
getter-only property Services
 33 JavaScript strict warning: , line 0: TypeError: setting
getter-only property TelemetryStopwatch
 33 JavaScript strict warning: , line 0: TypeError: setting
getter-only property Task
 33 JavaScript strict warning: , line 0: TypeError: setting
getter-only property PlacesUtils
 33 JavaScript strict warning: , line 0: TypeError: setting
getter-only property NewTabUtils
 11 JavaScript strict warning:
chrome://specialpowers/content/specialpowersAPI.js, line 214:
ReferenceError: reference to undefined property
x.SpecialPowers_wrappedObject
  4 JavaScript error: resource://gre/modules/XPCOMUtils.jsm, line
110: SyntaxError: missing ; before statement
  4 JavaScript error:
http://example.com/tests/content/base/test/bug696301-script-1.js, line
3: ReferenceError: c is not defined
  4 JavaScript error: , line 0: uncaught exception: 2147500033
  3 JavaScript strict warning:
resource://specialpowers/MockFilePicker.jsm, line 92: SyntaxError: octal
literals and octal escape sequences are deprecated
  3 JavaScript strict warning:
chrome://specialpowers/content/specialpowersAPI.js, line 662:
ReferenceError: reference to undefined property wrapPrivileged(...).classes
  3 JavaScript error: , line 0: uncaught exception: Permission
denied to add http://mochi.test:/%s as a protocol handler
  2 JavaScript strict warning:
chrome://specialpowers/content/specialpowersAPI.js, line 665:
ReferenceError: reference to undefined property wrapPrivileged(...).results
  2 JavaScript error: resource://gre/modules/vtt.jsm, line 1261:
TypeError: self.window.VTTRegion is not a constructor
  2 JavaScript error: , line 0: uncaught exception: Permission
denied to add http://remotehost:/%s as a content or protocol handler
  2 JavaScript error: , line 0: uncaught exception: Permission
denied to add ftp://mochi.test:/%s as a content or protocol handler
  2 JavaScript error: , line 0: uncaught exception: Permission
denied to add foo://mochi.test:/%s as a content or protocol handler
  2 JavaScript error: , line 0: uncaught exception: 2152923148
  1 JavaScript warning: resource://gre/modules/Preferences.jsm, line
381: mutating the [[Prototype]] of an object will cause your code to run
very slowly; instead create the object with the correct initial
[[Prototype]] value using Object.create
  1 JavaScript strict warning: resource://gre/modules/Webapps.jsm,
line 539: ReferenceError: reference to undefined property
Components.classes['@mozilla.org/app-migrator;1']
  1 JavaScript strict warning: resource://gre/modules/Webapps.jsm,
line 389: ReferenceError: 

Re: JavaScript (strict) Warning from FF mochitest-plain test.

2014-10-02 Thread Jeff Gilbert
Indeed, which is why I recommended assert, not warn.

Warnings are basically useless from a CI standpoint.
If you want to guarantee it, don't warn, assert.
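
(The same principle holds in Gecko C++ terms, for what it's worth; a
minimal sketch, where the condition and message are invented:

  NS_WARNING("stale cache entry");                    // a log line nobody reads
  MOZ_ASSERT(entry->IsFresh(), "stale cache entry");  // debug builds crash; CI goes orange

Only the second one guarantees anything.)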

-Jeff

- Original Message -
From: Martin Thomson m...@mozilla.com
To: dev-platform@lists.mozilla.org
Sent: Thursday, October 2, 2014 1:05:44 PM
Subject: Re: JavaScript (strict) Warning from FF mochitest-plain test.

On 02/10/14 12:52, Chris Peterson wrote:
 SpiderMonkey already forces ES5 strict mode and extra warnings mode
 for all self-hosted JS: bug 784295.

The problem is that this only generates an exception, not a test 
failure, so if the error is in code that isn't depended upon (also a 
problem), then it just disappears.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement and ship: Unprivileged WEBGL_debug_renderer_info

2015-06-15 Thread Jeff Gilbert
Summary:
The WEBGL_debug_renderer_info extension allows for querying which driver
(and commonly GPU) a WebGL context is running on. Specifically, it allows
querying the RENDERER and VENDOR strings of the underlying OpenGL driver.

By default, RENDERER and VENDOR queries in WebGL yield safe but useless
values. (For example, Gecko returns Mozilla/Mozilla for
RENDERER/VENDOR) Queries to UNMASKED_RENDERER_WEBGL and
UNMASKED_VENDOR_WEBGL yield the RENDERER and VENDOR string of the
underlying graphics driver. These values are combined to form the WebGL
Renderer field in about:support. On my system, these are:
* UNMASKED_RENDERER_WEBGL: ANGLE (NVIDIA GeForce GT 750M Direct3D11 vs_5_0
ps_5_0)
* UNMASKED_VENDOR_WEBGL: Google Inc. [1]

Bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1171228

Link To Standard:
https://www.khronos.org/registry/webgl/extensions/WEBGL_debug_renderer_info/

Do other browser engines implement this:
Chrome and IE implement this; Safari does not.

Platform Coverage: All platforms.

Current Target Release: Firefox 41

Related Preferences:
* webgl.disable-debug-renderer-info (default: false): Disable this
extension for unprivileged content.
* webgl.renderer-string-override (default: ): Overrides
UNMASKED_RENDERER_WEBGL query result when non-empty.
* webgl.vendor-string-override (default: ): Overrides
UNMASKED_VENDOR_WEBGL query result when non-empty.

Security and Privacy Concerns:
* Traditional user-agent sniffing concerns. (Known antipattern)
* This info includes what GPU is being used, which may contribute to
marketing profiles.
* This info adds easily-accessible bits of entropy that improve
fingerprinting, reducing privacy. (Panopticlick and others have
demonstrated that this is already very effective)

Web Developer Use-Cases:
* Sites can more easily and immediately identify and address concerns
caused by specific hardware or drivers. Currently, apps must
unconditionally work around an issue until we can ship a fix via browser
updates. This can mean performance degradation for unaffected machines,
sometimes for weeks.
* Sites can collate and cross-reference drivers and hardware when tracking
issues both user-reported and auto-detected, which both helps sites
identify problematic hardware, and helps browsers fix these issues in turn.
* This allows sites to offer better estimates of performance, and offer
reasonable defaults for quality settings.

[1] On Windows, we use ANGLE as an intermediary driver on top of D3D, hence
the VENDOR string being Google, Inc. and not NVIDIA here.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-06 Thread Jeff Gilbert
I propose that we stop recommending the universal use of an 'a' prefix for
arguments to functions in C and C++. If the prefix helps with
disambiguation, that's fine. However, use of this prefix should not be
prescribed in general.

`aFoo` does not provide any additional safety that I know of.[1] As a
superfluous prefix, it adds visual noise, reducing immediate readability of
all function declarations and subsequent usage of the variables within the
function definition.
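
For illustration, a contrived sketch (the function is invented):

  // With the prescribed warts:
  void SetSize(int32_t aWidth, int32_t aHeight);

  // Without them:
  void SetSize(int32_t width, int32_t height);

Both declarations carry exactly the same information; the 'a's are pure
ornamentation.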

Notable works or style guides [2] which do not recommend `aFoo`: [3]
* Google
* Linux Kernel
* Bjarne Stroustrup
* GCC
* LLVM
* Java Style (Java, non-C)
* PEP 0008 (Python, non-C)
* FreeBSD
* Unreal Engine
* Unity3D (largely C#)
* Spidermonkey
* Daala
* RR
* Rust
* Folly (from Facebook)
* C++ STL entrypoints
* IDL for web specs on W3C and WhatWG
* etc.

Notable works or style guides which *do* recommend `aFoo`:
* Mozilla (except for IDL, Java, and Python)
* ?

3rd-party projects in our tree which do not use `aFoo`:
* Cairo
* Skia
* ANGLE
* HarfBuzz
* ICU
* Chromium IPC
* everything under modules/ that isn't an nsFoo.c/cpp/h
* etc.?

3rd-party projects in our tree which *do* recommend `aFoo`:
* ?

As far as I can tell, the entire industry disagrees with us (as well as a
number of our own projects), which means we should have a good reason or
two for making our choice. No such reason is detailed in the style guide.

I propose we strike the `aFoo` recommendation from the Mozilla style guide.

-

[1]: Maybe it prevents accidental shadowing? No: Either this isn't allowed
by spec, or at least MSVC 2013 errors when compiling this.
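A minimal sketch of why (the function is invented):

  void Example(int foo) {
    int foo = 0;    // error: redeclares the parameter 'foo'
    {
      int foo = 0;  // legal shadowing; -Wshadow (GCC/Clang) flags it
    }
  }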

[2]: I do not mean this as an endorsement of the listed works and guides,
but rather as illustration on how unusual our choice is.

[3]: I created an Etherpad into which people are welcome to gather other
works, projects, or style guides that I missed:
https://etherpad.mozilla.org/6FcHs9mJYQ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 4:54 AM, Honza Bambas hbam...@mozilla.com wrote:

 I'm strongly against removing the prefix.  I got used to this and it has
 its meaning all the time I inspect code (even my own) and doing reviews.
 Recognizing a variable is an argument is very very useful.  It's important
 to have it and it's good we enforce it!

 -hb-


Please expand on this. I review a lot of code, and can't remember the last
time knowing a var was an arg was useful.
The only exception is 'out-vars', which I believe should be handled as a
different case. (WebGL code generally uses the out_ prefix)
`aFoo` does not discriminate, so it's impossible to tell if assignment to
`aFoo` is local or not without more context.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 6:03 AM, Kartikaya Gupta kgu...@mozilla.com wrote:

 I'd be interested to know: of those people who are in favour of
 removing the prefix, how many regularly have to deal with functions
 that are longer than two pages (a page is however much code you can
 see at a time in your coding environment)? I'd be happy to support
 removing the prefix if people also commit to splitting any giant
 functions they touch as part of the prefix removal.


I work with a number of these, but after a page or two, why is it at all
relevant which vars were args? For information flow? Should we mark locals
that purely derive from args as `aFoo` as well? Long functions (which have
poor readability anyway) generally have so much going on that the trivia of
which vars are args does not seem very useful.

I do not see how `aFoo` helps here, so please expand on this.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switch to Google C++ Style Wholesale (was Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++)

2015-07-15 Thread Jeff Gilbert
On Tue, Jul 14, 2015 at 9:17 PM, Nicholas Nethercote n.netherc...@gmail.com
 wrote:

 On Tue, Jul 14, 2015 at 8:06 PM, Bobby Holley bobbyhol...@gmail.com
 wrote:
  I'm not wild about this idea.

 It's such a boil-the-ocean solution I honestly thought bsmedberg was
 joking at first...


Well consistency is a major concern, so as long as the oceans are well and
truly boiled...

I'm certainly no fan of snake_case, but the literature does say it's more
readable, and their data is better than my anecdotes.

This Modest Proposal has my vote.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-16 Thread Jeff Gilbert
On Thu, Jul 16, 2015 at 6:50 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
wrote:

 On 2015-07-15 6:47 PM, Jeff Gilbert wrote:

 "Arg warts improve backtracking for debugging"
 Regardless of the validity of "Arg warts help illustrate information
 flow", the use-case of backtracking to the 'origin' of aFoo is also
 unconvincing, since you'd only need to go back once more than previously
 before moving once back 'up'. Either way sounds eminently automatable.
 - No practical change, thus irrelevant.


 I don't understand what you're saying here at all.  It seems that you're
 saying that this use case is not interesting to you, and that's OK, but
 that doesn't make it irrelevant.  :-)


Not at all! Rather, that either way it would be workable. The tiny extra
bit of effort it would take without aFoo is completely solved by
automation, and if this is a common enough use-case for this tiny extra bit
of effort to matter, it should *already* have been automated. Thus I don't
have a ton of sympathy for making a readily-automateable process require
differentially more effort without automation.



  "It's hard to implement"
 This is irrelevant for new code, and bulk-change is feasible once we have
 -Wshadow. (which isn't too hard. We shouldn't have shadowing in our code
 anyway, so we want -Wshadow regardless of this discussion) Besides, a
 number of our internal projects and modules already don't have it, so it's
 free for them.
 - Non-trivial, but not too hard to implement.


  The "isn't too hard" part of the above is factually incorrect.  See bug
 563195 for why.


I have read it, and it doesn't seem insurmountable. Just because it's not
free doesn't make it 'too hard'. I am naturally willing to work on this
myself.



  So, when you say "I like aFoo because it lets me know which vars are
 args",
 is there any un-touched-on aspect of the information this var is an
 argument that is useful to know? How does every other project survive
 without it?


 Given Benjamin's response, there is really no point in arguing over this
 any more, but just as a thought exercise: is there anything that would
 convince you that this is useful for some people?


There's only no point here if Benjamin's proposal carries! Besides, this
discussion is certainly a subset of the boil-the-ocean discussion. If aFoo
is truly valuable, then removing it via Google Style would hurt us just as
removing it a la carte would.

What could convince me? Reasoned and consistent arguments, perhaps examples
or anecdotes where aFoo really came in handy that wouldn't have been
covered under general good code style. Maybe someone might have done a
study on this style. Maybe there are other large projects which use this
style. Tautologically: Convincing arguments!


 I would expect having those people tell you that it's useful for them
 should be a good enough reason to assume that it is!


Especially as part of standards body, I'm perfectly fine with dismissing
requests for things people claim to want, but can't provide sufficient
reasoning for, [1] or likewise being unable to argue my own case
successfully. I'm well-acquainted with being convinced to change my
position through discussion and debate, and also with convincing others.
(or not!)

This is why we have these discussions! Claims are not substantiated simply
because people claim them to be true. We need to lay the reasons on the
table and then weigh them, as without reasons, choices are arbitrary.
Sometimes we even need to do it more than once, as time changes things.
(even if people hate doing this!) We may find no need for change, or we may
instead find we can change for the better.

[1]: To be clear, sufficient reasoning can definitely include culture,
harsh realities, pragmatism, and other such things. They are not all
granted the same gravity, but they all contribute.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-13 Thread Jeff Gilbert
On Sun, Jul 12, 2015 at 10:45 PM, Nicholas Nethercote 
n.netherc...@gmail.com wrote:

 On Thu, Jul 9, 2015 at 7:01 PM, Jeff Gilbert jgilb...@mozilla.com wrote:
 
  Arguments have the same lifetimes as locals, and the only exceptions to
  this apply to both args and locals. (references and pointers)

 Maybe I've misunderstood what you're saying here, but locals are
 frequently block-scoped which gives them a different lifetime to args.


Right, args have the lifetimes of function-block-scope locals.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-13 Thread Jeff Gilbert
On Sun, Jul 12, 2015 at 6:45 PM, Thomas Zimmermann tzimmerm...@mozilla.com
wrote:

 On 08.07.2015 at 16:36, smaug wrote:
  Do you actually have any data on how many % of Gecko devs would prefer
  not using aFoo?

 I strongly prefer 'aFoo' over 'foo' for the extra context that it gives
 to the variable. If we want to change anything we should rather
 introduce a separate prefix for output parameters.


Which part of this extra context is useful?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 1:03 PM, smaug opet...@mozilla.com wrote:

 As someone who spends more than 50% of working time doing reviews I'm
 strongly against this proposal.
 aFoo helps with readability - reader knows immediately when the code is
 dealing with arguments.


When and why is this useful to know?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 1:20 PM, smaug opet...@mozilla.com wrote:

 readability / easier to follow the dataflow are rather compelling reasons.


It hurts readability for me and many others.
I don't see how it revolutionizes following dataflow, since we have locals
that are pure functions of args, yet are not marked aFoo.

Outvars are a different beast, and in at least WebGL code, are marked as
such. (`out_` prefix)

`aFoo` is not a good solution for outvars. If outvars are the main reason
for `aFoo`, we should stop using `aFoo` for all arguments, and only mark
outvars.
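
For example (a sketch in the spirit of that convention; the function and
names here are invented):

  #include <algorithm>
  #include <cstdint>

  static bool ComputeMaxSize(int32_t requested, int32_t* const out_size) {
    if (requested <= 0)
      return false;
    *out_size = std::min(requested, int32_t(16384)); // obviously writes back to the caller
    return true;
  }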


I selfishly try to shorten the time I spend on reviewing a patch, and
 aFoo helps with that.


It hinders my patch reviewing. I've been speaking to those around me, and
they do not see any value in differentiating args from locals. (args are
just locals, anyway)


Though even more important is consistent coding style everywhere (per
 programming language).


Why don't we come into consistency with the industry at large, and with the
number of internal Mozilla projects which choose not to use `aFoo`?

I have found no other style guide that recommends `aFoo`. Why are we
different? Why do we accept reduced readability for all external
contributors? Why do so many other Mozilla projects not use this alleged
readability aid?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
Outvars are good candidates for having markings in the variable name.
`aFoo` for all arguments is a poor solution for this, though.

On Tue, Jul 7, 2015 at 1:22 PM, smaug opet...@mozilla.com wrote:

 On 07/07/2015 11:18 PM, Jeff Gilbert wrote:

 On Tue, Jul 7, 2015 at 1:03 PM, smaug opet...@mozilla.com wrote:

  As someone who spends more than 50% of working time doing reviews I'm
 strongly against this proposal.
 aFoo helps with readability - reader knows immediately when the code is
 dealing with arguments.


 When and why is this useful to know?



  The most common case in Gecko is knowing that one is assigning a value
  to an outparam.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-09 Thread Jeff Gilbert
On Wed, Jul 8, 2015 at 7:48 AM, smaug opet...@mozilla.com wrote:

 Another example where aFoo tends to be rather useful is lifetime
 management.
 If I see aFoo being used somewhere in a method after some unsafe method
 call
 (layout flush, any script callback handling, event dispatch, observer
 service notification etc.),
 I know that I need to check that the _caller_ follows COM-rules and keeps
 aFoo object alive during the method call.
 With non-aFoo variables I know the lifetime is controlled within the
 method.


Arguments have the same lifetimes as locals, and the only exceptions to
this apply to both args and locals. (references and pointers)
There are the same problems with locals as there are with args, as args are
just locals passed into the function.

FWIW, I hacked a python script that does s/aF[Oo]o/foo/, and MSVC does have
a code analysis mode that supports an error-on-warn for shadowed variable
names.

I can take a look at how bad the shadowing is in the tree.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 3:59 PM, Eric Rahm er...@mozilla.com wrote:

 I'm not a huge fan of the 'aFoo' style, but I am a huge fan of
 consistency. So if we want to change the style guide we should update our
 codebase, and I don't think we can reasonably do that automatically without
 introducing shadowing issues.


MSVC 2013 (which I believe is our main windows compiler right now) will
error during compilation if such a shadowing issue arises. Thus, if the
code compiles there, the `aFoo`-to-`foo` rename is safe. I would be very surprised if
GCC or Clang didn't have an equivalent option.


 Additionally I don't spend 50% of my time reviewing, so I'd say my opinion
 here (meh to aFoo) is less important. It's not an undue burden for me to
 include an aPrefix and if we have static analysis to check for it that
 would make it even less of an issue.


It can be a burden on the hundreds of devs who have to read and understand
the code in order to write more code. With the exception of a couple
people, review is not the bottleneck. The opinions of a few over-harried
reviewers should not hold undue sway over the many many devs writing code.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 6:06 PM, Karl Tomlinson mozn...@karlt.net wrote:

 Jeff Gilbert writes:

  I work with a number of these, but after a page or two, why is it at all
  relevant which vars were args? For information flow? Should we mark
 locals
  that purely derive from args as `aFoo` as well? Long functions (which
 have
  poor readability anyway) generally have so much going on that the trivia
 of
  which vars are args does not seem very useful..
 
  I do not see how `aFoo` helps here, so please expand on this.

 A simple variable name, such as font for example, may identify
 any of a number of fonts.  Such simple names, without any
 qualifier in the name, are often used in loops, for example,
 because it is the most important font in the immediate context.

 However a simple variable may also be used in a parameter list
 because when looking at the parameter list it is obvious which
 font is relevant in the interface.

 That means that if font is seen in the body of a function, the
 first question that arises is which font?


If it's named well, there should be no question which it refers to, with or
without argument decoration.


 If the variable is called aFont then we know which font because
 we know what function we are in.


Use aFont if it helps, just as we use iFoo and fFoo sometimes when doing
conversions.
Don't require it though.
In particular, `newSize` is better than `aSize` when also dealing with
mSize. Inferring the meaning of a variable from its status as an argument
is a crutch for poor variable naming. (and adds mental load)
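
Concretely (an invented snippet; assume IntSize is a simple size struct):

  class Foo {
    IntSize mSize;
  public:
    void SetSize(const IntSize& newSize) {
      if (newSize == mSize)  // unambiguous without any argument wart
        return;
      mSize = newSize;
    }
  };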
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Mon, Jul 6, 2015 at 8:26 PM, Mike Hommey m...@glandium.org wrote:

 The existence of aFoo goes along with the existence of mFoo, sFoo, kFoo,
 and others I might have forgotten. Not that I particularly care about
 aFoo, but why strike this one and not the others?[1] It's not like they
 have widespread use in the industry either. That is, the same reasoning
 could be applied to them, yet, you're stopping at aFoo. Why?


mFoo and sFoo have very different scopes compared to locals, so calling
them out is useful.
kFoo makes it clear that the variable is constant, and has connotations
regarding it being a hardcoded limit or value.
Note that commonly kFoo is actually a static constant. Immutable (`const`)
locals are often not kPrefixed.
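
Schematically (an invented class, not from the tree):

  #include <cstdint>

  class Widget {
    static const int32_t kMaxDepth = 8;  // k: named constant
    static int32_t sInstanceCount;       // s: static/class scope
    int32_t mDepth = 0;                  // m: member/instance scope

    void Reset() {
      const int32_t depth = 0;           // const local, usually not kPrefixed
      mDepth = depth;
    }
  };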
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Gilbert
On Tue, Jul 7, 2015 at 5:41 PM, Karl Tomlinson mozn...@karlt.net wrote:

 Jeff Gilbert writes:

  It can be a burden on the hundreds of devs who have to read and
 understand
  the code in order to write more code.

 Some people find the prefix helps readability, because it makes
 extra information immediately available in the code being
 examined, while you are indicating that this is a significant
 burden on readability.

 Can you explain why the extra letter is a significant burden?


Because extra noise is being forced into variable names for minimal
benefit. Every declaration is a sea of extra 'a's. Refactoring code means
doing a lot of s/aFoo/foo/ and vice-versa. Reading each arg name requires
first passing over 'a' before getting to anything relevant. Often this
means that short function bodies can have every fifth or sixth letter being
'a'.

Why does it matter that they're arguments? Arguments are just locals that
are passed into functions. (outparams are a different beast already
addressed elsewhere)

I would like to reiterate that *we are unusual* in our present preference
(not to mention requirement) for this in our style guideline. I'm not
proposing a change to something bizarre. I'm proposing the removal of an
extremely unusual style requirement.

If the 'a' prefix is a burden then the 'm' prefix must be also,
 and so we should be using this->member instead of mMember.


No, as this provides huge benefit by indicating when we are referencing a
member variable, which differ greatly from local variables in scope.
Arguments have the same scope as locals. There is benefit in mFoo,
particularly compared to requiring this->foo, which I don't think we even
have compiler or linter support for, and would clearly be superfluous in
terms of extra characters.


  The opinions of a few over-harried
  reviewers should not hold undue sway over the many many devs writing
 code.

 unless people want code to be reviewed.


`aFoo` is never going to make or break our ability to do code review. Low
code-review bandwidth is all but completely orthogonal to this discussion.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jeff Gilbert
WEBGL_dynamic_texture is due for a pruning refactor (I think I'm on
the hook for this), so don't base anything on it as-is.

IIRC, we don't believe WEBGL_security_sensitive_resources is viably
implementable in a safe way (timing attacks!), so I suggest ignoring
it.

Extending texture uploads to allow dom::Element uploads is easily done
from a technical standpoint, but doing it efficiently will take novel
non-trivial work. (not every dom::Element has a Layer/Image)

Adding picking to WebGL is a non-starter.

From an API standpoint, it could be interesting to try to use
ImageBitmaps as handles to snapshots of dom::Elements.

I think that it would be most efficient just to have a meeting about
these topics, instead of iterating more slowly via email.

-Jeff

On Mon, Jan 4, 2016 at 1:46 PM, Kearwood "Kip" Gilbert
 wrote:
> Hello All,
>
> In WebVR, we often present UI as a Head's Up Display (HUD) that floats
> in front of the user.  Additionally, we often wish to show 2d graphics,
> video, and CSS animations as a texture in 3d scenes.  Creating these
> textures is something that CSS and HTML are great at.
>
> Unfortunately, I am not aware of an easy and efficient way to capture an
> animated rendering of an interactive HTML Element and bring it into the WebGL
> context.  A "moz-element" -like API would be useful here.
>
> Perhaps we could solve this by implementing and extending the proposed
> WEBGL_dynamic_texture extension:
>
> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
> Essentially, we would extend the same API but allow the WDTStream
> interface to apply to more HTML elements, not just HTMLCanvasElement,
> HTMLImageElement, or HTMLVideoElement.
>
> We would also need to implement WEBGL_security_sensitive_resources to
> enforce the security model:
>
> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
> Does this sound like a good idea?  I feel that this is something that
> all WebGL developers would want, as it would make building front-ends
> for games much easier.
>
> If others feel the same, I would also like to follow up with a proposal
> to make the captured HTML elements interactive through use of an
> explicit "pick buffer" added to canvases.
>
> I look forward to your feedback.
>
> Cheers,
>   - Kearwood "Kip" Gilbert
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jeff Gilbert
On Mon, Jan 4, 2016 at 4:46 PM, Robert O'Callahan  wrote:
> On Tue, Jan 5, 2016 at 10:46 AM, Kearwood "Kip" Gilbert <
> kgilb...@mozilla.com> wrote:
>
>> In WebVR, we often present UI as a Head's Up Display (HUD) that floats
>> in front of the user.  Additionally, we often wish to show 2d graphics,
>> video, and CSS animations as a texture in 3d scenes.  Creating these
>> textures is something that CSS and HTML are great at.
>>
>> Unfortunately, I am not aware of an easy and efficient way to capture an
>> animated rendering of an interactive HTML Element and bring it into the WebGL
>> context.  A "moz-element" -like API would be useful here.
>>
>> Perhaps we could solve this by implementing and extending the proposed
>> WEBGL_dynamic_texture extension:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
>
> This proposal seems unnecessarily complex. Is there a way for me to send
> feedback?

Yeah, we have a public mailing list: public_we...@khronos.org
As with anything WebGL related, you can also just talk to me about it.

>
> Essentially, we would extend the same API but allow the WDTStream
>> interface to apply to more HTML elements, not just HTMLCanvasElement,
>> HTMLImageElement, or HTMLVideoElement.
>>
>> We would also need to implement WEBGL_security_sensitive_resources to
>> enforce the security model:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
>
> I wish I'd known about this proposal earlier! This looks pretty good,
> though I'd always thought this would be too complicated to spec and
> implement to be practical. Glad to be wrong! Although I think we should get
> as much feedback as possible on this in case of hidden gotchas.

Historically in our investigations of this, it seemed very very hard
to guarantee a lack of timing vectors, even just for arithmetic.
(Particularly since we're handing the sources to the driver, which
will try to optimize away as much as it can)

>
> Does this sound like a good idea?  I feel that this is something that
>> all WebGL developers would want, as it would make building front-ends
>>
> for games much easier.
>>
>
> Yes, I think together these would be very useful.

It only helps some games. Most game development efforts will likely be
unaffected, since it's generally easiest to use an existing engine,
which will already have such things handled.

Most games will either just use overlayed HTML, or have an in-engine
solution for UI. The desire to embed web elements in a scene itself is
relatively rare. (Certainly few existing games do this)

>
> If others feel the same, I would also like to follow up with a proposal
>> to make the captured HTML elements interactive through use of an
>> explicit "pick buffer" added to canvases.
>>
>
> How would that work? Being able to synthesize mouse (touch?) events in HTML
> elements would add another set of issues.
>
> I assume the idea of mixing CSS 3D-transformed elements into a WebGL scene
> has been rejected for some reason?

This can't reasonably be done given the level of abstraction provided
by GL APIs.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-07 Thread Jeff Gilbert
I think rather we should aim to provide a way for the application to
implement custom remapping of events. A simple version of this is
allowing the app to synthesize enough of the inputs to virtually use
our existing UI. I think this is much simpler than dredging around in
the WebGL API.

On Thu, Jan 7, 2016 at 4:12 PM, Robert O'Callahan <rob...@ocallahan.org> wrote:
> On Fri, Jan 8, 2016 at 11:08 AM, Jeff Gilbert <jgilb...@mozilla.com> wrote:
>>
>> On Wed, Jan 6, 2016 at 8:07 PM, Luke Wagner <lwag...@mozilla.com> wrote:
>> > FWIW, I was wondering if we could go a step further and allow
>> > (optional) user interaction with the rendered DOM elements.  That way
>> > you could, say, select text on a 3d surface with a mouse or use an
>> >  tag.  It seems like this would be possible if the vertex/pixel
>> > shaders were constrained to, say, affine transformations which starts
>> > to sound rather similar to the whitelisting approach mentioned
>> > elsewhere in this thread for mitigating timing attacks.
>>
>> I think this should be handled outside of the WebGL API. This seems to
>> be a bunch of extra work and complexity to handle a relatively narrow
>> use-case. I believe it's much easier to just provide functionality in
>> the platform to allow a library to implement such specific uses.
>
>
> Some features are always going to be really hard to provide with fully
> custom UI. For example , for security reasons. Also full
> custom UI for text entry is practically impossible to implement on par with
> browser built-in UI, due to the need to integrate with system IMEs,
> assistive tech, spellchecking dictionaries, etc.
>
> That's why I think finding ways to integrate live Web content into WebGL
> scenes would be really valuable.
>
> Rob
> --
> lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
> toD
> selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
> rdsme,aoreseoouoto
> o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea lurpr
> .a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
> esn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-07 Thread Jeff Gilbert
On Wed, Jan 6, 2016 at 8:07 PM, Luke Wagner  wrote:
> FWIW, I was wondering if we could go a step further and allow
> (optional) user interaction with the rendered DOM elements.  That way
> you could, say, select text on a 3d surface with a mouse or use an
>  tag.  It seems like this would be possible if the vertex/pixel
> shaders were constrained to, say, affine transformations which starts
> to sound rather similar to the whitelisting approach mentioned
> elsewhere in this thread for mitigating timing attacks.

I think this should be handled outside of the WebGL API. This seems to
be a bunch of extra work and complexity to handle a relatively narrow
use-case. I believe it's much easier to just provide functionality in
the platform to allow a library to implement such specific uses.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-11-25 Thread Jeff Gilbert
On Wed, Nov 25, 2015 at 3:16 AM, Till Schneidereit
 wrote:
> FWIW, I received questions about this via private email and phone calls
> from two people working on extensions that support their products. Their
> extensions sit in the review queue with no chance of getting through it
> before the signing requirement kicks in. This puts them into a situation
> where their only reasonable course of action is to advise their users to
> switch browsers.
>

Is it just me, or does this sound completely unacceptable? Sloughing off
more users? Things like this are why it's hard not to be cynical.

I doubt anyone is going to switch to Firefox because our extension
signing is safe. (though I do think we should have some form of
signing) But they will gladly switch away when anything breaks,
particularly when we reduce the activation energy needed to switch: If
their extension won't work in new Firefox, it doesn't matter so much
that they won't have that extension in, say, Chrome.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A list of easy-ish gfxContext-to-DrawTarget conversions

2016-01-12 Thread Jeff Gilbert
For a similar situation, we used an etherpad to allow painless
acquisition of pending todos. Just move the instances you want
'assigned' to you into a  section in the etherpad. It's not
perfect, but it's an alternative to responding to the email, and makes
it easy to see the state of things.

On Tue, Jan 12, 2016 at 2:54 PM, Nicholas Nethercote
 wrote:
> Greetings,
>
> Over the past few weeks I have spent some time doing gfxContext-to-DrawTarget
> conversions. (Also nsRenderingContext-to-DrawTarget conversions, which are basically
> the same because nsRenderingContext is a wafer-thin wrapper around 
> gfxContext.)
>
> The number of occurrences of gfxContext and nsRenderingContext is now about
> 20% lower than when I started (it went from 1813 to 1428), but I now need to
> work on other things. Below is a list of cases that I didn't get to, but which
> look like they might be easy-ish for people with some Thebes/Moz2D
> experience.
>
> If you do work on any of these, it might be worth replying to this message to
> indicate this, to reduce the chance of two people duplicating work.
>
> - gfxAlphaBoxBlur::Init(): the return value.
>
> - gfxAlphaBoxBlur::BlurInsetBox() + FillDestinationPath(): the parameters.
>
> - gfxAlphaBoxBlur::Paint() + DrawBlur(): the parameters.
>
> - nsNativeThemeWin::DrawWidgetBackground: the parameter, in tandem with
>   changing gfxWindowsNativeDrawing::mContext and
>   gfxWindowsNativeDrawing::BeginNativeDrawing().
>
> - nsCSSRendering::PaintGradient(): the parameter.
>
> - DocumentRenderParent::mCanvasContext.
>
> - ClipToContain() in BasicLayerManager.cpp: the parameter.
>
> - PaintLayerContext::mTarget.
>
> - InstallLayerClipPreserves3D() in BasicLayerManager.cpp: the parameter.
>
> - PaintWithMask() in BasicLayersImpl.cpp: the parameter.
>
> - gfxTextContextPaint::InitStrokeGeometry(): the parameter.
>
> - imgFrame::SurfaceForDrawing(): the parameter.
>
> - nsLayoutUtils::GetSnappedBaseline{X,Y}(): the parameter.
>
> - ComputeSnappedImageDrawingParameters(): the parameter.
>
> - The Android instance of nsPluginInstanceOwner::Paint(): the parameter.
>
> - nsSVGUtils::SetClipRect(): the parameter.
>
> - nsSVGUtils::Make{Fill,Stroke}PatternFor(): the parameter.
>
> - BufferAlphaColor::mContext.
>
> - DrawBackgroundColor() in nsCSSRendering.cpp: the parameter.
>
> - SetupBackgroundClip() in nsCSSRendering.cpp: the parameter.
>
> - MustCreateSurface() in ClippedImage.cpp.
>
> - nsCSSRendering::PaintGradient(): the nsRenderingContext parameter.
>
> - DumpFrames() in nsPrintEngine.cpp: this is commented-out code and it won't
>   compile because the nsRenderingContext is used in a bogus way (the call to
>   IsVisibleForPainting() has too many arguments). Perhaps all the
>   EXTENDED_DEBUG_PRINTING code can be removed? It looks ancient.
>
> - Three CreateReferenceRenderingContext() call sites only require a
>   DrawTarget (nsTextBoxFrame::DrawText(), CreateReferenceDrawTarget(),
>   LazyReferenceRenderingDrawTargetGetterFromFrame::GetRefDrawTarget()), so
>   creating a gfxContext is overkill.
>
> Finally, if I could choose a single function to magically Moz2Dify, it would 
> be
> AppUnitWidthOfStringBidi(). Changing its nsRenderingContext parameter to a
> DrawTarget would allow huge swathes of functions to be changed similarly, as
> well as nsBoxLayoutState::mRenderingContext and nsCSSOffsetState::rendContext.
> But it's a hard case; the embedded gfxContext end up being passed through
> nsFontMetrics::DrawString() to gfxTextRun::Draw() where it is put into a
> TextRunDrawParams and gets used in lots of complicated ways.
>
> Nick
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-05-31 Thread Jeff Gilbert
On Tue, May 31, 2016 at 4:39 PM, Nicholas Nethercote
 wrote:
> On Wed, Jun 1, 2016 at 1:05 AM, Benjamin Smedberg  
> wrote:
>> You shouldn't need to annotate the file/line separately, because that is
>> (or at least should be!) the top of the stack.
>
> Yes. Don't get hung up on the lack of annotations. It isn't much of a
> problem; you can click through easily enough. I have filed bug 1277104
> to fix the handful of instances that are showing up in practice, but
> it'll only be a minor improvement.

Perhaps this isn't meant for me then? I looked at the query from the
first post, but it's just noise to me. If it included the file that it
crashed from, it would suddenly be very useful, since it'd then be
trivial to see if there's something relevant to me.

As it stands now, the query alone doesn't seem useful to me. If it's
meant to be useful to developers who write MOZ_CRASHes, this is a
problem. If not, please ignore!

I would be extremely interested in MOZ_CRASHes and friends
automatically getting bugs filed and needinfo'd. An index of
crashes-by-file would get half-way there for me.
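
(For comparison, the only difference is the optional message; a minimal
sketch with an invented reason string:

  MOZ_CRASH();                                   // aggregates as an anonymous MOZ_CRASH()
  MOZ_CRASH("Unable to frobnicate the widget");  // reason is searchable in the tree

The annotated form is what makes a crash-stats entry greppable back to
its source file.)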
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-06-01 Thread Jeff Gilbert
It would be useful to have a dashboard that collates this information better.

PS: Sarcasm is unhelpful.

On Tue, May 31, 2016 at 7:14 PM, Nicholas Nethercote
<n.netherc...@gmail.com> wrote:
> On Wed, Jun 1, 2016 at 11:26 AM, Jeff Gilbert <jgilb...@mozilla.com> wrote:
>>
>> Perhaps this isn't meant for me then? I looked at the query from the
>> first post, but it's just noise to me. If it included the file that it
>> crashed from, it would suddenly be very useful, since it'd then be
>> trivial to see if there's something relevant to me.
>
> Let's look at the top #3:
>
>
> 1  MOZ_CRASH(Shutdown too long, probably frozen, causing a crash.)
> 129715  9.92 %
>
> If you use your favourite source code search tool to look for
> "Shutdown too long", you'll find that this crash is occurring in
> toolkit/components/terminator/nsTerminator.cpp. For example, here's a
> DXR link:
>
> https://dxr.mozilla.org/mozilla-central/search?q=%22Shutdown+too+long%22=false
>
> The line in question looks like this:
>
>  MOZ_CRASH("Shutdown too long, probably frozen, causing a crash.");
>
>
> 2  MOZ_CRASH() 25987   1.99 %
>
> This one matches all calls to MOZ_CRASH() that don't provide a string
> parameter. Digging into these ones is slightly harder, requiring a new
> search for bugs that have "moz crash reason" set to "MOZ_CRASH()":
>
> https://crash-stats.mozilla.com/search/?product=Firefox_crash_reason=%3DMOZ_CRASH%28%29&_facets=signature&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform&_columns=moz_crash_reason#facet-signature
>
>
> 3  MOZ_CRASH(GFX: Unable to get a working D3D9 Compositor) 21040.16 %
>
> Searching for "working D3D9 Compositor" identifies this one as coming
> from gfx/layers/d3d9/CompositorD3D9.cpp.
>
>
> And so on. Searching for strings in code is a useful technique in many
> situations, I recommend it!
>
>
> BTW, thank you to all those who have already looked through the list
> and mentioned existing bugs and/or filed new bugs.
>
> Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-06-01 Thread Jeff Gilbert
Awesome, this sounds like what I was after. (though actual components
aren't really necessary. If that part is a pain point, I would prefer
to have a tool without it than to have no tool)

On Wed, Jun 1, 2016 at 12:23 PM, Ted Mielczarek <t...@mielczarek.org> wrote:
> On Tue, May 31, 2016, at 09:26 PM, Jeff Gilbert wrote:
>> On Tue, May 31, 2016 at 4:39 PM, Nicholas Nethercote
>> <n.netherc...@gmail.com> wrote:
>> > On Wed, Jun 1, 2016 at 1:05 AM, Benjamin Smedberg <benja...@smedbergs.us> 
>> > wrote:
>> >> You shouldn't need to annotate the file/line separately, because that is
>> >> (or at least should be!) the top of the stack.
>> >
>> > Yes. Don't get hung up on the lack of annotations. It isn't much of a
>> > problem; you can click through easily enough. I have filed bug 1277104
>> > to fix the handful of instances that are showing up in practice, but
>> > it'll only be a minor improvement.
>>
>> Perhaps this isn't meant for me then? I looked at the query from the
>> first post, but it's just noise to me. If it included the file that it
>> crashed from, it would suddenly be very useful, since it'd then be
>> trivial to see if there's something relevant to me.
>>
>> As it stands now, the query alone doesn't seem useful to me. If it's
>> meant to be useful to developers who write MOZ_CRASHes, this is a
>> problem. If not, please ignore!
>>
>> I would be extremely interested in MOZ_CRASHes and friends
>> automatically getting bugs filed and needinfo'd. An index of
>> crashes-by-file would get half-way there for me.
>
> I believe at some point in the past we talked about trying to do a "top
> crashes by bug component" view, but maintaining that mapping was hard.
> It turns out that we're storing this data in moz.build files  nowadays
> (for example[1]), and we even have a web service on hg.mozilla.org to
> expose it for any given revision[2]. Unfortunately that web service is
> currently broken[3], but gps tells me he's just been delaying fixing it
> because there weren't any consumers complaining.
>
> When the source file from the last frame of the stack used to generate
> the signature points to hg.mozilla.org we could query that web service
> to get a bug component for the file in question and put that in the
> crash report, and expose it to queries. That would make it easy to get
> lists of crashes by component, which I think would do what people here
> are asking for. I filed bug 1277337 to track this idea.
>
> -Ted
>
> 1.
> https://dxr.mozilla.org/mozilla-central/rev/4d63dde701b47b8661ab7990f197b6b60e543839/dom/media/moz.build#7
> 2.
> http://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/mozbuildinfo.html
> 3. https://bugzilla.mozilla.org/show_bug.cgi?id=1263973
> 4. https://bugzilla.mozilla.org/show_bug.cgi?id=1277337
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style for C++ enums

2016-04-08 Thread Jeff Gilbert
Strong preference against eFoo, here. :)

Just use enum classes.

On Fri, Apr 8, 2016 at 10:35 AM, smaug  wrote:
> On 04/08/2016 07:38 PM, Nick Fitzgerald wrote:
>>
>> On Fri, Apr 8, 2016 at 9:29 AM, Birunthan Mohanathas <
>> birunt...@mohanathas.com> wrote:
>>
>>> On 8 April 2016 at 18:10, Kartikaya Gupta  wrote:

 Others?
>>>
>>>
>>> enum class OptionD {
>>>SentenceCaseValues,
>>>ThisWontBeConfusedWithOtherThings
>>>// ... because you need to use
>>> OptionD::ThisWontBeConfusedWithOtherThings
>>> };
>>>
>>
>> Strong +1 for enum classes. That way we don't need any more nasty
>> prefixing and can instead rely on the compiler and type system.
>>
>
> enum classes are great, but I'd still use a prefix for the values. Otherwise
> the values look like types - nested classes or such.
> (Keeping my reviewer's hat on, so thinking about readability here.)
>
>
>
> -Olli
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Build System Project - Update from the last 2 weeks

2016-04-08 Thread Jeff Gilbert
I thought Linux did LTO but not PGO?

On Tue, Apr 5, 2016 at 3:53 PM, Mike Hommey  wrote:
> On Tue, Apr 05, 2016 at 09:02:09PM +0100, David Burns wrote:
>> Below is a highlight of all work the build peers have done in the last 2
>> weeks as part of their work to modernise the build infrastructure.
>>
>> Since the last report[1] a large number of improvements have landed in
>> Mozilla Central.
>>
>> The build system now lazily installs test files. Before, the build copied
>> tens of thousands of test and support files. This could take dozens of
>> seconds on Windows or machines with slow I/O. Now, the build system defers
>> installing test files until they are needed there (e.g. when running tests
>> or creating test packages). Furthermore, only the test files relevant to
>> the action performed are installed. Mach commands running tests should be
>> significantly faster, as they no longer examine the state of tens of
>> thousands of files on every invocation.
>>
>> After upgrading build machines to use VS2015, we have seen a decrease in
>> build times[2] for PGO on Windows by around 100 minutes. This brings PGO
>> times on Windows in line with that of PGO(Strictly speaking this is LTO)
>> times on Linux.
>
> Just a nit: strictly speaking Windows builds do PGO+LTO, Linux builds do
> PGO, but not LTO.
>
> Mike
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style for C++ enums

2016-04-11 Thread Jeff Gilbert
On Mon, Apr 11, 2016 at 4:00 PM, Bobby Holley <bobbyhol...@gmail.com> wrote:
> On Mon, Apr 11, 2016 at 2:12 PM, Jeff Gilbert <jgilb...@mozilla.com> wrote:
>> I propose that if you're in a part of the codebase where you can't
>> tell if it's an enum or a function pointer (regardless of the fact
>> that the compiler can't confuse these), you should do more looking
>> around before working on it, much less reviewing changes to the code.
>> If you can't tell if it's an enum, you're not going to be able to
>> effectively make changes, so buckle up and do some looking around.
>
>
> I strongly disagree with this statement and the sentiment behind it. The
> ability for certain people to be able to quickly review changes in code that
> they don't have paged into working memory is vital to the survival of Gecko.

Well I disagree in the long term. If we are going to survive, we
*must* minimize such requirements. Skimping on code review should be
low on our list of options, given its propensity to create bugs and
accrue technical debt in the name of moving fast in the short term.

It's a tough job. I definitely understand. But good review is a
necessity. We have a number of problems that are "harsh realities",
but I hope we're trying to fix what we can, not optimizing assuming
failure.

> Mozilla is very bad at distributing review loads. There are literally a
> handful of people holding the ship together (and smaug is one of them). It
> is not a fun job, and the bus-factor is frighteningly low. This means that
> we should absolutely give more weight to the preferences of top reviewers
> because:
> (a) We owe it to them, and
> (b) We want to make it easier for other people to join their ranks.

Why do we have to be bad at distributing review loads? Is it a tooling
problem? And on-boarding problem? Why can't we reduce our need and/or
spread the load?

It's not pleasant to try to discuss these things only to be told that
I don't review enough code to matter. A bunch of us review code, and
yes, some more than others. I don't know where to find actual data,
but there is a panel helping with MozReview feedback whose members
were selected from 20 people chosen by virtue of being "top reviewers
by volume", a group which to which smaug, I, and at least 18 others
belong.

There are people (including smaug) who review more code than me. Their
opinions *do* have more weight than mine. However, I hope my
arguments are still heard and granted what weight they are entitled
to, particularly since many people I talk with in person are strongly
opposed to changes here and elsewhere, but reluctant to join these
often-frustrating discussions.

It's always better when we have discussions based on data rather than
opinions, but when we're down to opinions, it's really hard to judge
support via email. There's no good "+1" option without spamming, and
many people just don't want to deal with the stress that merely
following these discussions can incur.

>> To go completely off the rails, I no longer even believe that all of
>> Gecko should have a single style.
>
>
> Yes, that is off the rails in this context.
>
>>
>> I think the whole attempt is
>> increasingly a distraction vs the alternative, and I say this as
>> someone who writes and reviews under at least three differently-styled
>> code areas. The JS engine will, as ever, remain distinct
>
>
> Jason agreed to converge SM style with Gecko.

The SM engineers I talked to are dismissive of this. I will solicit responses.

>> , as will
>> third-party code, and also all non-actively maintained files. (most of
>> our codebase)
>>
>> Yes, a formatter could magic-wand us to most of the superficial
>> aspects of a single code style, but we'd then need to backfill
>> readability into some of the trickier bits. Sometimes we need to step
>> outside the prevailing style to keep something reasonably readable.
>>
>> But we /already have/ a process for keeping things readable: Code reviews.
>> And if that fails: Module ownership.
>>
>> I'm not sure why we hold centralization in such high regard when it
>> comes to code style. And, if we are continuing to centralize this,
>> we're going to need to have an actual process for making decisions
>> here.
>>
>> I suspect the goal is to have an oracle we can ask how we should
>> format code in order to be readable. However, we can't even agree on
>> it, so the right answer often is indeterminate. However, people within
>> their modules have a better idea of what works in each case.
>>
>> I do think we should have some high-level style guidance, but I think
>> we're increasingly micromanaging style. Let code review take care of
>> the specifics here. If it's not readable without prefixes, definitely
>> consider adding prefixes. Otherwise don't require it.

Re: Coding style for C++ enums

2016-04-08 Thread Jeff Gilbert
It's noisy in code you *do* understand, which is the bulk of the code
we should be dealing with the majority of the time.

I do not parse this initially as a type because that generally doesn't
make sense given context.
Generally the names involved are also unlikely to be types, based on name alone.

It would also add to our already-pungent code smell.

On Fri, Apr 8, 2016 at 3:03 PM, Mats Palmgren <m...@mozilla.com> wrote:
> On 2016-04-08 23:03, Jeff Gilbert wrote:
>>
>> Strong preference against eFoo, here. :)
>
>
> Strong preference *for* eFoo, here. :)
>
> If I see Bar::Foo anywhere in code I'm not familiar with, my brain
> is likely to first parse that as a type before realizing that, hmm
> that doesn't make sense in an expression context, and then I will
> likely have to lookup what that silly Bar thing is just to be sure.
>
> eFoo is unambiguous and utterly clear.
>
> /Mats
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style for C++ enums

2016-04-11 Thread Jeff Gilbert
I'm not sure how this is a compromise. We were already supposed to use
enum classes with new code.

Every additional glyph incurs mental load. Code should instead be
written so these things are not necessary to explicitly state. *That*
is the code we should be trying to write. In well-written code,
superfluous warts detract from readability.

At this point, it feels a lot like I could propose full hungarian
notation to this list piecemeal, and there would be ongoing
'compromises' to add it one step at a time. After all, who wants to
trace back to the variable declaration when the variable name can tell
us what type it is?


I propose that if you're in a part of the codebase where you can't
tell if it's an enum or a function pointer (regardless of the fact
that the compiler can't confuse these), you should do more looking
around before working on it, much less reviewing changes to the code.
If you can't tell if it's an enum, you're not going to be able to
effectively make changes, so buckle up and do some looking around.


To go completely off the rails, I no longer even believe that all of
Gecko should have a single style. I think the whole attempt is
increasingly a distraction vs the alternative, and I say this as
someone who writes and reviews under at least three differently-styled
code areas. The JS engine will, as ever, remain distinct, as will
third-party code, and also all non-actively maintained files. (most of
our codebase)

Yes, a formatter could magic-wand us to most of the superficial
aspects of a single code style, but we'd then need to backfill
readability into some of the trickier bits. Sometimes we need to step
outside the prevailing style to keep something reasonably readable.

But we /already have/ a process for keeping things readable: Code reviews.
And if that fails: Module ownership.

I'm not sure why we hold centralization in such high regard when it
comes to code style. And, if we are continuing to centralize this,
we're going to need to have an actual process for making decisions
here.

I suspect the goal is to have an oracle we can ask how we should
format code in order to be readable. However, we can't even agree on
it, so the right answer often is indeterminate. However, people within
their modules have a better idea of what works in each case.

I do think we should have some high-level style guidance, but I think
we're increasingly micromanaging style. Let code review take care of
the specifics here. If it's not readable without prefixes, definitely
consider adding prefixes. Otherwise don't require it.


FWIW, if we must have a required prefix, I do have a preference for the
'k' prefix over the 'e' prefix, since enum values are in the same
'constants' category. I can be persuaded of the usefulness of 'k' for
enums; of 'e', not so much.
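
For reference, here is the spelling the compromise below produces, using
the Change example quoted further down this thread:

  enum class Change { eMinus, ePlus };  // scoped *and* e-prefixed
  // Call site: ChangeIndentation(element, Change::ePlus);
  // versus the unprefixed Change::Plus argued for above.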


On Mon, Apr 11, 2016 at 8:00 AM, Kartikaya Gupta  wrote:
> Based on all the feedback so far I think the best compromise is to use
> enum classes with the "e" prefix on the values. As was mentioned, the
> "e" prefix is useful to distinguish between enum values and function
> pointer passing at call sites, but is a small enough prefix that it
> shouldn't be too much of a burden.
>
> I realize not everybody is going to be happy with this but I don't
> want this discussion to drag on to the point where it's a net loss of
> productivity no matter what we end up doing. I'll update the style
> guide.
>
> Thanks all!
>
> kats
>
>
> On Sun, Apr 10, 2016 at 11:43 AM, smaug  wrote:
>> On 04/10/2016 05:50 PM, Aryeh Gregor wrote:
>>>
>>> On Fri, Apr 8, 2016 at 8:35 PM, smaug  wrote:

 enum classes are great, but I'd still use prefix for the values.
 Otherwise the values look like types - nested classes or such.
 (Keeping my reviewer's hat on, so thinking about readability here.)
>>>
>>>
>>> In some cases I think it's extremely clear, like this:
>>>
>>>enum class Change { minus, plus };
>>>
>>> whose sole use is as a function parameter (to avoid a boolean
>>> parameter), like so:
>>>
>>>ChangeIndentation(*curNode->AsElement(), Change::plus);
>>>
>>> In cases where it might be mistaken for a class, it might make more
>>> sense to prefix the enum class name with "E".  I don't think this
>>> should be required as a general policy, though, because for some enums
>>> it might be clear enough without it.
>>>
>>
>> Indeed in your case it is totally unclear whether one is passing a function
>> pointer or enum value as param.
>> If the value was prefixed with e, it would be clear.
>>
>> Consistency helps with reviewing (and in general code readability) a lot, so
>> the less we have optional coding style rules, or
>> context dependent rules, the better.
>>
>>
>>
>>
>>
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Re: Proposing preferring Clang over GCC for developer builds

2016-03-02 Thread Jeff Gilbert
For standard development builds, --enable-debug build speed is (to me)
the primary motivator since we've guaranteed that they're equally
correct. (within reason) I'll gladly run some extra tests to gather
data about this.

FWIW, with a 26% speedup, it would definitely seems like it'd be worth
investigating also for automation. Is anyone/anybug driving that?

On Wed, Mar 2, 2016 at 2:50 PM, Gregory Szorc  wrote:
> Over in bug 1253064 I'm proposing switching developer builds to prefer
> Clang over GCC because the limited numbers we have say that Clang can build
> mozilla-central several minutes faster than GCC (13 minutes vs 17.5 on my
> I7-6700K). I'm not proposing switching what we use to produce builds in
> automation. I'm not proposing dropping GCC support. This is all about
> choosing faster defaults so people don't spend as much time waiting for
> builds to complete and become more productive as a result.
>
> Please comment on the bug if you have something to add.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposing preferring Clang over GCC for developer builds

2016-03-02 Thread Jeff Gilbert
On Wed, Mar 2, 2016 at 3:45 PM, Mike Hommey  wrote:
> More importantly, changing the official toolchain has implications on
> performance.

Sorry, I meant for general automation. Our final spins (especially
LTO/PGO builds) should remain whatever gives us maximum perf. (not
making any claims myself here!)

Our PGO/LTO builds can take 10x+ what our normal integration builds
take if it nets us a few percentage points of runtime perf.

I suppose it becomes a question of divergence between fast-building
builds and 'final' PGO/fully-optimized builds. We already have this to
some extent with PGO vs non-PGO builds.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-21 Thread Jeff Gilbert
FWIW, I use static Create functions for fallible heap-only objects,
and StackFallible::ctor(bool* const out_success/error) for the rarer
fallible stack-createable objects.

It sounds like this lines up with existing proposals here.

Example fallible heap-only:
https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/gfx/gl/GLScreenBuffer.cpp#46
https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/gfx/gl/SharedSurfaceEGL.cpp#19

Example fallible stack-createable:
https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/dom/canvas/WebGLContextDraw.cpp#254
(Definition is at the top of the file)

Generally my preference for static Create functions is that they don't
allocate the actual class until they are infallible, and thus they are
never in partially-valid states. The ctor is almost always trivial, so
as to keep all the logic centralized in the Create function.

Static Create funcs with thin ctors also allow you to use const to a
greater extent, which can be really nice. (many getters can become
direct access to public const members)
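
To make that concrete, a minimal sketch of the shape I mean, with a
hypothetical class (only MFBT's UniquePtr is assumed):

  #include <cstddef>
  #include "mozilla/UniquePtr.h"

  class Cache final {
  public:
    const size_t mCapacity;  // public const member; no getter needed

    static mozilla::UniquePtr<Cache> Create(size_t aCapacity) {
      if (!aCapacity) {
        return nullptr;  // all fallible logic runs before any allocation
      }
      return mozilla::UniquePtr<Cache>(new Cache(aCapacity));
    }

  private:
    explicit Cache(size_t aCapacity)  // trivial ctor: never partially valid
      : mCapacity(aCapacity) {}
  };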


Shoehorning is not my favorite. I'd much rather get the minor
refactoring done to modernize the cruft, rather than limp it along.

On Thu, Apr 21, 2016 at 8:36 AM, Nick Fitzgerald
 wrote:
> On Thu, Apr 21, 2016 at 3:24 AM, Nicholas Nethercote wrote:
>
>> On Thu, Apr 21, 2016 at 7:38 PM, Nicolas Silva 
>> wrote:
>> > Fallible construction (even with a way to report failure) is annoying if
>> > only because the object's destructor has to account for the possible
>> > invalid states. I much prefer having a static creation method that will
>> > only instantiate the object in case of success, and mark the constructor
>> > protected. Something like:
>> >
>> > static
>> > already_AddRefed<Foo> Foo::Create(SomeParams...) {
>> > // logic that may fail...
>> > if (failed) {
>> >  return nullptr;
>> > }
>> > return MakeAndAddRef<Foo>(stuffThatDidNotFail);
>> > }
>>
>> So instead of doing the infallible part in the constructor and the
>> fallible part in the Init() function, you're suggesting doing the
>> fallible part first and only calling the constructor once it succeeds?
>> Interesting. I can see it would work nicely for some heap-allocated
>> objects. But I see two possible problems:
>>
>> - It doesn't appear to work with stack-allocated objects?
>>
>
> I think `mozilla::Maybe` works well, as mentioned elsewhere in the
> thread. Here is an example of a non-boxed class whose ctor is infallible
> and has a factory method returning a `Maybe` where
> `None` indicates failure:
> https://dxr.mozilla.org/mozilla-central/rev/4feb4dd910a5a2d3061dbdd376a80975206819c6/js/public/UbiNodeDominatorTree.h#207-274
>
> In general I agree with Nicolas: use patterns that make half-constructed
> things impossible and avoid the whole mess.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Out parameters, References vs. Pointers (was: Proposal: use nsresult& outparams in constructors to represent failure)

2016-04-21 Thread Jeff Gilbert
Pointers are preferable for outparams because they make it clearer
what's going on at the callsite. (at least indicating that something
non-trivial is happening)
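
A toy illustration (hypothetical functions, definitions elided):

  struct Data {};
  struct Hash {};

  void ComputeHash(const Data& aData, Hash* aOutHash);     // pointer outparam
  void ComputeHashRef(const Data& aData, Hash& aOutHash);  // reference outparam

  void Example(const Data& aData) {
    Hash hash;
    ComputeHash(aData, &hash);    // the '&' flags that hash may be written
    ComputeHashRef(aData, hash);  // nothing here hints that hash can change
  }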

On Wed, Apr 20, 2016 at 8:07 PM, Kan-Ru Chen (陳侃如)  wrote:
> Nicholas Nethercote  writes:
>
>> Hi,
>>
>> C++ constructors can't be made fallible without using exceptions. As a 
>> result,
>> for many classes we have a constructor and a fallible Init() method which 
>> must
>> be called immediately after construction.
>>
>> Except... there is one way to make constructors fallible: use an |nsresult&
>> aRv| outparam to communicate possible failure. I propose that we start doing
>> this.
>
> Current coding style guidelines suggest that out parameters should use
> pointers instead of references. The suggested |nsresult&| will be
> consistent with |ErrorResult&| usage from DOM but against many other out
> parameters, especially XPCOM code.
>
> Should we special case that nsresult and ErrorResult as output
> parameters should always use references, or make it also the default
> style for out parameters?
>
> I think this topic has been discussed before but didn't reach a
> consensus. Based on the recent effort to make the code use a somewhat
> consistent style, should we expand on this on the wiki?
>
>Kanru
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Generating Visual Studio project files by default

2016-05-24 Thread Jeff Gilbert
What's the build-time impact of this?

On Tue, May 24, 2016 at 4:00 PM, Gregory Szorc  wrote:
> Coming soon to your local builds, Visual Studio project files will be
> generated automatically when building on Windows because we want to
> encourage more people to use them because fully-featured IDEs can be
> productivity wins.
>
> The Visual Studio projects now automatically target the Visual Studio
> version being used (1 week ago they would always generate VS2013 project
> files). Another new change is that unless the build configuration changes,
> the Visual Studio files won't be updated by the build system unless they
> need to be changed. That means if you do a pull+build when Visual Studio is open,
> you shouldn't get tons of warnings that you need to reload files to pick up
> changes.
>
> The Visual Studio integration isn't perfect. We'd like to encourage more
> development on Windows because that's where most Firefox users are. So if
> you'd like improvements to the Visual Studio project files, please file
> Core :: Build Config bugs.
>
> This change was tracked in bug 1275297. Bug 1275419 tracks a follow-up to
> allow disabling their generation.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
I don't visually like ||/&& at start of line, but I can't fault the
reasoning, so I'm weakly for it.
I don't think it's important enough to change existing code though.

On Thu, Feb 16, 2017 at 1:47 PM,   wrote:
> Question of the day:
> When breaking overlong expressions, should &&/|| go at the end or the 
> beginning of the line?
>
> TL;DR: Coding style says 'end', I think we should change it to 
> 'beginning' for better clarity, and consistency with other operators.
>
>
> Our coding style reads:
> "Break long conditions after && and || logical connectives. See below for the 
> rule for other operators." [1]
> """
> Overlong expressions not joined by && and || should break so the operator 
> starts on the second line and starts in the same column as the start of the 
> expression in the first line. This applies to ?:, binary arithmetic operators 
> including +, and member-of operators (in particular the . operator in 
> JavaScript, see the Rationale).
>
> Rationale: operator at the front of the continuation line makes for faster 
> visual scanning, because there is no need to read to end of line. Also there 
> exists a context-sensitive keyword hazard in JavaScript; see bug 442099, 
> comment 19, which can be avoided by putting . at the start of a continuation 
> line in long member expression.
> """ [2]
>
>
> I initially focused on the rationale, so I thought *all* operators should go 
> at the front of the line.
>
> But it seems I've been living a lie!
> &&/|| should apparently be at the end, while other operators (in some 
> situations) should be at the beginning.
>
>
> Now I personally think this just doesn't make sense:
> - Why the distinction between &&/|| and other operators?
> - Why would the excellent rationale not apply to &&/||?
> - Pedantically, the style talks about 'expressions *not* joined by &&/||',
> but what about expressions that *are* joined by &&/||? (Undefined Behavior!)
>
> Based on that, I believe &&/|| should be made consistent with *all* 
> operators, and go at the beginning of lines, aligned with the first operand 
> above.
>
> And therefore I would propose the following changes to the coding style:
> - Remove the lonely &&/|| sentence at [1].
> - Rephrase the first sentence at [2] to something like: "Overlong expressions 
> should break so that the operator starts on the following line, in the same 
> column as the first operand for that operator. This applies to all binary 
> operators, including member-of operators (in particular the . operator in 
> JavaScript, see the Rationale), and extends to ?: where the 2nd and third 
> operands should be on separate lines and start in the same column as the 
> first operand."
> - Keep the rationale at [2].
>
> Also, I think we should add something about where to break expressions with 
> operators of differing precedences, something like: "Overlong expressions 
> containing operators of differing precedences should first be broken at the 
> operator of lowest precedence. E.g.: 'a+b*c' should be split at '+' before 
> '*'"
>
>
> A bit more context:
> Looking at the history of the coding style page, a certain "Brendan" wrote 
> that section in August 2009 [3], shortly after a discussion here [4] that 
> seemed to focus on the dot operator in Javascript. In that discussion, &&/|| 
> appear in examples at the end of lines and nobody talks about that (because 
> it was not the main subject, and/or everybody agreed with it?)
>
> Discuss!
>
>
> [1] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Control_Structures
> [2] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Operators
> [3] 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style$compare?locale=en-US=7315=7314
> [4] 
> https://groups.google.com/d/msg/mozilla.dev.platform/Ji9lxlLCYME/zabUmQI9S-sJ
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
Both are readable with judicious indentation on wrapping. If you must
recompose the entire expression in your mind, the code is too terse,
and we should consider cleaning it up and making it more readable.
&&/|| position will not make or break this.

On Thu, Feb 16, 2017 at 3:10 PM, Jean-Yves Avenard
<jyaven...@mozilla.com> wrote:
> Hi
>
>
> On 16/02/17 23:56, Jeff Gilbert wrote:
>>
>> I don't visually like ||/&& at start of line, but I can't fault the
>> reasoning, so I'm weakly for it.
>> I don't think it's important enough to change existing code though.
>
>
> Disclaimer: I'm now biased about this.
>
> I've been writing C and C++ code for now most of my life. And during that
> time you take strong habits. Not always good habits, but there are there.
>
> Now having operators at the end of the line, including logical ones like &&
> and || is something I've always done, because it's just the way things were
> done.
> Never really thought much about it, you place the operator at the end of the
> line when you're splitting a long logical operation.
>
> but is it the proper things to do?
> Just like Gerald above, I had misread (or more accurately understood
> differently) what the rules were in that document, because there's various
> discrepancies in it.
>
> When you read code, especially during review, the review process is made
> much more easily when you can at a glance understand one particular logical
> calculation.
>
> reading this: (turn on courier)
>
>   return ((aCodecMask & VPXDecoder::VP8)
>   && aMimeType.EqualsLiteral("video/webm; codecs=vp8"))
>  || ((aCodecMask & VPXDecoder::VP9)
>  && aMimeType.EqualsLiteral("video/webm; codecs=vp9"))
>  || ((aCodecMask & VPXDecoder::VP9)
>  && aMimeType.EqualsLiteral("video/vp9"));
>
> than:
>   return ((aCodecMask & VPXDecoder::VP8) &&
>   aMimeType.EqualsLiteral("video/webm; codecs=vp8")) ||
>  ((aCodecMask & VPXDecoder::VP9) &&
>   aMimeType.EqualsLiteral("video/webm; codecs=vp9")) ||
>  ((aCodecMask & VPXDecoder::VP9) &&
>   aMimeType.EqualsLiteral("video/vp9"));
>
> where does the || apply, where does the && ?
> I must recompose the entire expression in my mind to understand what's going
> on.
> The previous one requires no such mental process.
>
> If we are talking about new code, the question becomes, can this become the
> rule *and* can we add such rule to clang-format
> seeing that we're in the process of applying clang-format to the entire
> source tree (with exceptions), modifying existing code would become a moot
> issue.
>
> Cheers
> JY
>
>
>> On Thu, Feb 16, 2017 at 1:47 PM,  <gsquel...@mozilla.com> wrote:
>>>
>>> Question of the day:
>>> When breaking overlong expressions, should &&/|| go at the end or the
>>> beginning of the line?
>>>
>>> TL;DR: Coding style says 'end', I think we should change it to
>>> 'beginning' for better clarity, and consistency with other operators.
>>>
>>>
>>> Our coding style reads:
>>> "Break long conditions after && and || logical connectives. See below for
>>> the rule for other operators." [1]
>>> """
>>> Overlong expressions not joined by && and || should break so the operator
>>> starts on the second line and starts in the same column as the start of the
>>> expression in the first line. This applies to ?:, binary arithmetic
>>> operators including +, and member-of operators (in particular the . operator
>>> in JavaScript, see the Rationale).
>>>
>>> Rationale: operator at the front of the continuation line makes for
>>> faster visual scanning, because there is no need to read to end of line.
>>> Also there exists a context-sensitive keyword hazard in JavaScript; see bug
>>> 442099, comment 19, which can be avoided by putting . at the start of a
>>> continuation line in long member expression.
>>> """ [2]
>>>
>>>
>>> I initially focused on the rationale, so I thought *all* operators should
>>> go at the front of the line.
>>>
>>> But it seems I've been living a lie!
>>> &&/|| should apparently be at the end, while other operators (in some
>>> situations) should be at the beginning.
>>>
>>>
>>> Now I personally think this just doesn't make sense:

Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
If there are so many parens that we require an editor to disambiguate
them, it's too dense. Add whitespace or decompose the expression until
readable.
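
For instance, the return expression quoted below could be decomposed along
these lines (a sketch using the same identifiers, not a reviewed patch):

  const bool vp8 = (aCodecMask & VPXDecoder::VP8) &&
                   aMimeType.EqualsLiteral("video/webm; codecs=vp8");
  const bool vp9 = (aCodecMask & VPXDecoder::VP9) &&
                   (aMimeType.EqualsLiteral("video/webm; codecs=vp9") ||
                    aMimeType.EqualsLiteral("video/vp9"));
  return vp8 || vp9;

No parens need counting, and the &&/|| placement question mostly evaporates.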

On Thu, Feb 16, 2017 at 3:15 PM, Mike Hommey  wrote:
> On Fri, Feb 17, 2017 at 12:10:32AM +0100, Jean-Yves Avenard wrote:
>> reading this: (turn on courier)
>>
>>   return ((aCodecMask & VPXDecoder::VP8)
>>   && aMimeType.EqualsLiteral("video/webm; codecs=vp8"))
>>  || ((aCodecMask & VPXDecoder::VP9)
>>  && aMimeType.EqualsLiteral("video/webm; codecs=vp9"))
>>  || ((aCodecMask & VPXDecoder::VP9)
>>  && aMimeType.EqualsLiteral("video/vp9"));
>>
>> than:
>>   return ((aCodecMask & VPXDecoder::VP8) &&
>>   aMimeType.EqualsLiteral("video/webm; codecs=vp8")) ||
>>  ((aCodecMask & VPXDecoder::VP9) &&
>>   aMimeType.EqualsLiteral("video/webm; codecs=vp9")) ||
>>  ((aCodecMask & VPXDecoder::VP9) &&
>>   aMimeType.EqualsLiteral("video/vp9"));
>>
>> where does the || apply, where does the && ?
>> I must recompose the entire expression in my mind to understand what's going
>> on.
>> The previous one requires no such mental process.
>
> ... assuming the indentation and use of parens is correct, which it
> might not be, so you still have to look closely.
>
> Which brings an interesting observation: the review tools are not
> helpful in that situation, while a local editor would allow you to know
> which closing parens corresponds to which opening one.
>
> Mike
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
I'll not get into why I disagree with your points about
format-the-world here. Let's leave that for elsewhere.

Here's my take on the example at hand:
https://pastebin.mozilla.org/8979527
In this case, I do like the latter two best. (Old-style and simplification)

On Thu, Feb 16, 2017 at 3:25 PM, L. David Baron <dba...@dbaron.org> wrote:
> On Thursday 2017-02-16 15:11 -0800, Jeff Gilbert wrote:
>> I would not assume that's necessarily going to happen without
>> contention. Consistency is not a goal in and of itself.
>
> Using clang-format on the entire tree has the massive value of:
>
>  * reviewers not needing to point out whitespace and indentation issues
>
>  * reduced friction for new contributors (being able to use a tool
>instead of repeatedly iterating on review comments)
>
>  * reduced friction for existing contributors working in new areas
>of the codebase
>
> which outweighs indvidual style preferences or historic module
> differences here.
>
> We should try to do something as close as reasonably possible to the
> existing dominant style so that we disrupt annotate/blame the least.
>
>
> As for the proposal to switch to start-of-line && and || -- I
> wouldn't want to see us try to make such a change *before* we
> clang-format the entire tree, because we'll just create an
> inconsistent mess that makes it harder to figure out what the local
> style is.  I'd be slightly more willing to consider it *after* we
> clang-format the whole tree (or simultaneously, which is perhaps
> better for blame since there's only one revision to skip), but then
> we need to balance it against the cost of the increased disruption
> to blame that makes understanding the history of our code harder.
>
> -David
>
>> On Thu, Feb 16, 2017 at 3:04 PM,  <gsquel...@mozilla.com> wrote:
>> > There's an ongoing effort to use clang-format to automatically format All 
>> > The Codes [1], so I think we should try and make sure we agree (or at 
>> > least settle*) on a coding style when it's going to be enforced 
>> > everywhere. ;-)
>> >
>> > * I'm presenting my own preferences here, but in the end I'm fine with 
>> > complying with whatever we can all agree to live with. Consistence is more 
>> > important than personal preferences.
>> >
>> > g.
>> >
>> > On Friday, February 17, 2017 at 9:57:04 AM UTC+11, Jeff Gilbert wrote:
>> >> I don't visually like ||/&& at start of line, but I can't fault the
>> >> reasoning, so I'm weakly for it.
>> >> I don't think it's important enough to change existing code though.
>> >>
>> >> On Thu, Feb 16, 2017 at 1:47 PM,  <gsqu...@mozilla.com> wrote:
>> >> > Question of the day:
>> >> > When breaking overlong expressions, should &&/|| go at the end or the 
>> >> > beginning of the line?
>> >> >
>> >> > TL;DR: Coding style says 'end', I think we should change it to 
>> >> > 'beginning' for better clarity, and consistency with other operators.
>> >> >
>> >> >
>> >> > Our coding style reads:
>> >> > "Break long conditions after && and || logical connectives. See below 
>> >> > for the rule for other operators." [1]
>> >> > """
>> >> > Overlong expressions not joined by && and || should break so the 
>> >> > operator starts on the second line and starts in the same column as the 
>> >> > start of the expression in the first line. This applies to ?:, binary 
>> >> > arithmetic operators including +, and member-of operators (in 
>> >> > particular the . operator in JavaScript, see the Rationale).
>> >> >
>> >> > Rationale: operator at the front of the continuation line makes for 
>> >> > faster visual scanning, because there is no need to read to end of 
>> >> > line. Also there exists a context-sensitive keyword hazard in 
>> >> > JavaScript; see bug 442099, comment 19, which can be avoided by putting 
>> >> > . at the start of a continuation line in long member expression.
>> >> > """ [2]
>> >> >
>> >> >
>> >> > I initially focused on the rationale, so I thought *all* operators 
>> >> > should go at the front of the line.
>> >> >
>> >> > But it seems I've been living a lie!
>> >> > &&/|| should apparently be at the end, while other operators (in some 
>> >> > situations) should be at the beginning.

Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jeff Gilbert
I would not assume that's necessarily going to happen without
contention. Consistency is not a goal in and of itself.

On Thu, Feb 16, 2017 at 3:04 PM,  <gsquel...@mozilla.com> wrote:
> There's an ongoing effort to use clang-format to automatically format All The 
> Codes [1], so I think we should try and make sure we agree (or at least 
> settle*) on a coding style when it's going to be enforced everywhere. ;-)
>
> * I'm presenting my own preferences here, but in the end I'm fine with 
> complying with whatever we can all agree to live with. Consistence is more 
> important than personal preferences.
>
> g.
>
> On Friday, February 17, 2017 at 9:57:04 AM UTC+11, Jeff Gilbert wrote:
>> I don't visually like ||/&& at start of line, but I can't fault the
>> reasoning, so I'm weakly for it.
>> I don't think it's important enough to change existing code though.
>>
>> On Thu, Feb 16, 2017 at 1:47 PM,  <gsqu...@mozilla.com> wrote:
>> > Question of the day:
>> > When breaking overlong expressions, should &&/|| go at the end or the 
>> > beginning of the line?
>> >
>> > TL;DR: Coding style says 'end', I think we should change it to 
>> > 'beginning' for better clarity, and consistency with other operators.
>> >
>> >
>> > Our coding style reads:
>> > "Break long conditions after && and || logical connectives. See below for 
>> > the rule for other operators." [1]
>> > """
>> > Overlong expressions not joined by && and || should break so the operator 
>> > starts on the second line and starts in the same column as the start of 
>> > the expression in the first line. This applies to ?:, binary arithmetic 
>> > operators including +, and member-of operators (in particular the . 
>> > operator in JavaScript, see the Rationale).
>> >
>> > Rationale: operator at the front of the continuation line makes for faster 
>> > visual scanning, because there is no need to read to end of line. Also 
>> > there exists a context-sensitive keyword hazard in JavaScript; see bug 
>> > 442099, comment 19, which can be avoided by putting . at the start of a 
>> > continuation line in long member expression.
>> > """ [2]
>> >
>> >
>> > I initially focused on the rationale, so I thought *all* operators should 
>> > go at the front of the line.
>> >
>> > But it seems I've been living a lie!
>> > &&/|| should apparently be at the end, while other operators (in some 
>> > situations) should be at the beginning.
>> >
>> >
>> > Now I personally think this just doesn't make sense:
>> > - Why the distinction between &&/|| and other operators?
>> > - Why would the excellent rationale not apply to &&/||?
>> > - Pedantically, the style talks about 'expressions *not* joined by &&/||',
>> > but what about expressions that *are* joined by &&/||? (Undefined Behavior!)
>> >
>> > Based on that, I believe &&/|| should be made consistent with *all* 
>> > operators, and go at the beginning of lines, aligned with the first 
>> > operand above.
>> >
>> > And therefore I would propose the following changes to the coding style:
>> > - Remove the lonely &&/|| sentence at [1].
>> > - Rephrase the first sentence at [2] to something like: "Overlong 
>> > expressions should break so that the operator starts on the following 
>> > line, in the same column as the first operand for that operator. This 
>> > applies to all binary operators, including member-of operators (in 
>> > particular the . operator in JavaScript, see the Rationale), and extends 
>> > to ?: where the 2nd and third operands should be on separate lines and 
>> > start in the same column as the first operand."
>> > - Keep the rationale at [2].
>> >
>> > Also, I think we should add something about where to break expressions 
>> > with operators of differing precedences, something like: "Overlong 
>> > expressions containing operators of differing precedences should first be 
>> > broken at the operator of lowest precedence. E.g.: 'a+b*c' should be split 
>> > at '+' before '*'"
>> >
>> >
>> > A bit more context:
>> > Looking at the history of the coding style page, a certain "Brendan" wrote 
>> > that section in August 2009 [3], shortly after a discussion here [4] that 
>> > seemed to focus on the dot operator in Javascript.

Re: Changing the representation of rectangles in platform code

2017-02-10 Thread Jeff Gilbert
Reducing overflow risk and simplifying intersection both seem worth
it, but godspeed whoever works on this!
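
For what it's worth, the intersection win is easy to see in a sketch
(standalone toy type; field names as in the proposal below):

  #include <algorithm>
  #include <cstdint>

  struct Rect { int32_t x1, y1, x2, y2; };

  Rect Intersect(const Rect& a, const Rect& b) {
    Rect r;
    r.x1 = std::max(a.x1, b.x1);
    r.y1 = std::max(a.y1, b.y1);
    r.x2 = std::min(a.x2, b.x2);
    r.y2 = std::min(a.y2, b.y2);
    return r;  // empty whenever x2 <= x1 or y2 <= y1
  }

  // With x/y/w/h the same operation needs x + width sums up front, which
  // is exactly where the overflow described below can creep in.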

On Fri, Feb 10, 2017 at 12:45 PM, Milan Sreckovic
 wrote:
> First step needs to happen completely before the second step does, so I
> guess the danger is that we start and give up before we get to step 2.  I
> don't think that will happen, but it is something we should always think
> about.
>
> Third step - sure, I can see this not getting completed - examining places
> that do xmax = xmin + width instead of just getting the max X, for example.
>
>
> On 09-Feb-17 20:56, Nicholas Nethercote wrote:
>>
>> My concern is that  this will get partially done but never completed, and
>> we'll end up in a situation where we have two styles in place. We have a
>> bad track record with that sort of thing.
>>
>> Nick
>>
>> On Thu, Feb 9, 2017 at 1:45 PM, Jeff Muizelaar  wrote:
>>
>>> It's not very easy to reason about overflow issues with our current
>>> representation. This means that we currently just pretend that they
>>> don't happen. The idea for changing the representation came up in
>>> response to a security bug where we didn't really have a better
>>> solution.
>>>
>>> Changing to x1, x2, y1, y2 will allow us to match the pixman rectangle
>>> representation that we use for regions. The region code shows quite a
>>> bit profiles so even a small improvement in this area is nice to have.
>>>
>>> -Jeff
>>>
>>> On Wed, Feb 8, 2017 at 9:19 PM, David Major  wrote:

 Is there a specific problem that's being solved by this proposal? It
 would be helpful to make this a bit more concrete, like "these benchmarks
 go x% faster", or "here's a list of overflow bugs that will just vanish",
 or "here's some upcoming work that this would facilitate".

 On Thu, Feb 9, 2017 at 1:56 PM, Botond Ballo  wrote:
>
> Hi everyone!
>
> I would like to propose changing the internal representation of
> rectangles in platform code.
>
> We currently represent a rectangle by storing the coordinates of its
> top-left corner, its width, and its height. I'll refer to this
> representation as "x/y/w/h".
>
> I would like to propose storing instead the coordinates of the
> top-left corner, and the coordinates of the bottom-right corner. I'll
> refer to this representation as "x1/y1/x2/y2".
>
> The x1/y1/x2/y2 representation has several advantages over x/y/w/h:
>
- Several operations are more efficient with x1/y1/x2/y2, including
  intersection, union, and point-in-rect.
- The representation is more symmetric, since it stores two quantities
  of the same kind (two points) rather than a point and a dimension
  (width/height).
- The representation is less susceptible to overflow. With x/y/w/h,
  computation of x2/y2 can overflow for a large range of values of x/y
  and w/h. However, with x1/y1/x2/y2, computation of w/h cannot overflow
  if the coordinates are signed and the resulting w/h is unsigned.
>
> A known disadvantage of x1/y1/x2/y2 is that translating the rectangle
> requires translating both points, whereas translating x/y/w/h only
> requires translating one point. I think this disadvantage is minor in
> comparison to the above advantages.
>
> The proposed change would affect the class mozilla::gfx::BaseRect, and
> classes that derive from it (such as CSSRect, LayoutRect, etc., and,
> notably, nsRect and nsIntRect), but NOT other rectangle classes like
> DOMRect.
>
> I would like to propose making the transition as follows:
>
- Replace direct accesses to the 'width' and 'height' fields throughout
  the codebase with calls to getter and setter methods. (There aren't
  that many - on the order of a few dozen, last I checked.)
>
>- Make the representation change, which is non-breaking now that
>  the direct accesses to 'width' and 'height' have been removed.
>
>- Examine remaining calls to the getters and setters for width and
>  height and see if any can be better expressed using operations
>  on the points instead.
>
> The Graphics team, which owns this code, is supportive of this change.
> However, since this is a fundamental utility type that's used by a
> variety of platform code, I would like to ask the wider platform
> development community for feedback before proceeding. Please let me
> know if you have any!
>
> Thanks,
> Botond
>
> [1]
> http://searchfox.org/mozilla-central/rev/672c83ed65da286b68be1d02799c35fdd14d0134/gfx/2d/BaseRect.h#46
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Re: Intent to deprecate: Building mozilla-central without Skia enabled

2016-08-23 Thread Jeff Gilbert
Agreed.

On Mon, Aug 22, 2016 at 11:08 AM, George Wright  wrote:
> I'm looking to remove the option to build mozilla-central without Skia.
>
> Currently we default to Skia being disabled, and enable it using
> --enable-skia. This is problematic because all of our officially supported
> build configurations enable Skia, and as such the Skia-is-disabled build has
> been buggy for quite some time. This has manifested most recently in bug
> 1296524.
>
> In the past year, Skia has moved from being an optional part of our graphics
> code to being a core, indispensable module. We are actively pursuing making
> it the default backend in almost all canvas and content configurations; it
> is already the default software canvas option on all platforms and will soon
> be for all software-backed content as well.
>
> To this end, I'd like to remove support for building mozilla-central without
> Skia as I think it should now be considered an integral part of our code.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Gilbert
They're basically out of stock now, but if you can find them, old
refurbished 2x Intel Xeon E5-2670 (2.6GHz Eight Core) machines were
bottoming out under $1000/ea. It happily does GCC builds in 8m, and I
have clang builds down to 5.5. As the v2s leave warranty, similar
machines may hit the market again.

I'm interested to find out how the new Ryzen chips do. It should fit
their niche well. I have one at home now, so I'll test when I get a
chance.

On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
 wrote:
> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>
>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>> >
>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>> >
>> > 24s here. So faster link times and significantly faster clobber times. I'm
>> > sold!
>> >
>> > Any motherboard recommendations? If we want developers to use machines
>> > like this, maintaining a current config in ServiceNow would probably
>> > help.
>>
>>
>> Until the ServiceNow catalog is updated...
>>
>> The Lenovo ThinkStation P710 is a good starting point (
>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>> From the default config:
>>
>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>
>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>> (plus/minus a few hundred depending on configuration specific).
>>
>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>> is absurd (costs >$2000 more when fully configured). Lenovo's
>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>> somewhere will sell the same thing for cheaper.
>>
>> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
>> got the
>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>> 512 GB SSD, the price was very reasonable compared to similar
>> configurations at Dell, HP, others.
>>
>> The just-released Broadwell-E processors with 6-10 cores are also nice
>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>> have no links to share. They should be <$2600 fully configured. That's a
>> good price point between the i7-6700K and a dual socket Xeon. Although if
>> you do lots of C++ compiling, you should get the dual socket Xeons (unless
>> you have access to more cores in an office or a remote machine).
>
>  The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
>  hdd for $1525 cad so probably just under $1000 usd.  With just that
>  machine I can do a 10 minute linux debug build.  For less than the
>  price of the e3 machine quoted above I can buy 4 of those machines
>  which I expect would produce build times under 5:00.
>
> I believe with 32gb of ram there's enough fs cache that disk performance
> doesn't actually matter, but it might be worth investigating moving an
> ssd to that machine at some point.
>
> So I would tend to conclude Xeons are not a great deal unless you really
> need to build for windows a lot before someone gets icecc working there.
>
> Trev
>
>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
>> out of the box. And, yes, you should use Windows 10 as your primary OS
>> because that's what our users mostly use. I run Hyper-V under Windows 10
>> and have at least 1 Linux VM running at all times. With 32 GB in the
>> system, there's plenty of RAM to go around and Linux performance under the
>> VM is excellent. It feels like I'm dual booting without the rebooting part.
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please do NOT hand-edit web platform test MANIFEST.json files

2017-03-21 Thread Jeff Gilbert
JSON allows comments if all the JSON processors we use handle comments. :)

On Tue, Mar 21, 2017 at 8:52 AM, James Graham  wrote:
> On 20/03/17 22:15, gsquel...@mozilla.com wrote:
>
>> Sorry if it's a silly suggestion:
>> Could the current tool insert some helpful reminders *everywhere* in the
>> generated file (so it's can't be missed)?
>> E.g., every 2nd line would read: "// PSA: This file is auto-generated by
>> ./mach wpt-manifest-update, please don't edit!" ;-)
>
>
> It is of course possible but it's not trivial since JSON doesn't allow
> comments, so there would have to be some preprocessing magic and so on.
> Probably better to spend the time on a good solution.
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: windows build anti-virus exclusion list?

2017-04-20 Thread Jeff Gilbert
Can confirm for Ryzen: FWIW, my stock R7 1800x (RAM @2133MHz for now
:( ) did a Windows debug clobber in 15:32.40 with the default -j16.
(any directory that showed up as read by MsMpEng in Resource Monitor
excluded for Defender)

I'll give it a run through with Defender disabled, and see if there's
any change.

On Tue, Apr 18, 2017 at 2:17 AM, Marco Bonardo  wrote:
> On Fri, Mar 17, 2017 at 7:20 PM, Ben Kelly  wrote:
>
>> On Fri, Mar 17, 2017 at 1:36 PM, Ted Mielczarek 
>> wrote:
>>
>> With defender
>> disabled the best I can get is 18min.  This is on one of the new lenovo
>> p710 machines with 16 xeon cores.
>>
>
> Just to add one datapoint, I just replaced my desktop with an AMD Ryzen
> 1700x (OC @3.8GHz) and I can clobber in less than 18mins on something that
> may be cheaper than those XEON machines (an OC 1700 can probably do the
> same). I'm currently using -J20, I didn't yet try to bump that up to see
> what's the limit, I'm positive I can get better times out of this machine,
> but I need to find some time for testing.
> I'm using Panda Free, with the SSD containing the sources in its exclusion
> list, didn't do anything to Defender (but I guess it's disabled by having
> another AV in the system).
> You can also disable Defender with a GPO, by setting "Turn off Windows
> Defender" to Enabled.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: `else for` or `else { for... }`?

2017-08-30 Thread Jeff Gilbert
IMO: Never else-for. (or else-while)

Else-if is a reasonable continuation of concept: "Well it wasn't that,
what if it's this instead?"
Else-for is just shorthand for "well it wasn't that, so let's loop
over something".

Else-if is generally used for chaining, often effectively as an
advanced switch/match statement.

I've never actually seen else-for, but I imagine it would be used for:
if (!foo) {
} else for (int i = 0; i < foo->bar; i++) {
}

I don't think this pattern has enough value to be acceptable as
shorthand. BUT, I didn't know it was a thing until now, so I'm
naturally biased against it.

On Wed, Aug 30, 2017 at 2:58 PM,   wrote:
> Let's keep the flames alive!
>
> Should we always put braces after an `else`, with the only exception being 
> another `if`?
> Or should we also have exceptions for the other control structures (while, 
> do, for, switch)?
>
> A.
> if (...) {
>   ...
> } else {
>   for (...) {
> ...
>   }
> }
>
> B.
> if (...) {
>   ...
> } else for (...) {
>   ...
> }
>
> I can see arguments for both, so I'm not too sure which one should 
> win. :-)
> WDYT?
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Follow up on clang-format

2017-09-29 Thread Jeff Gilbert
Now instead we will get to try to phrase code in a way that
clang-format will preserve readably?

I should think "it doesn't always produce nice formatting, but it's at
least consistent" sounds like a major warning sign, particularly given
that we definitely have tricky code for which we have made efforts to
maximize readability and maintainability.

Maybe this is more useful for our more poorly manicured modules, but
that it will drag down well-maintained code in favor of a
lowest-common-denominator style is absolutely concerning to me.
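
For the genuinely hand-formatted bits there is at least clang-format's
off/on escape hatch, though every use of it is a small admission of the
problem. A sketch, with a made-up matrix:

  // clang-format off
  static const float kRotate90[4] = {
      0.0f, -1.0f,
      1.0f,  0.0f,  // hand-aligned so the matrix structure stays visible
  };
  // clang-format on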

On Thu, Sep 28, 2017 at 1:15 AM, Chris Pearce  wrote:
> On Friday, May 23, 2014 at 3:29:48 AM UTC+2, Anthony Jones wrote:
>> Some of you may remember the discussion on clang-format and the `mach
>> clang-format` command. What we have in place right now is very temporary
>> but it is functional enough to give it a try. I have not put the effort
>> into upstreaming my changes. Depending on the feedback I receive I will
>> either:
>>
>> * Finish my existing changes and upstream them
>> * Remove the `mach clang-format` command altogether
>> * Do nothing
>>
>> I have personally found it useful. However I would like to hear from
>> other people who have tried it to help me decide what to do next.
>>
>> Anthony
>
> I use `./mach clang-format` on basically every patch I write. It has 
> eliminated the style nits that get picked up when people review my patches, 
> at least when I remember to run it!
>
> I've considered writing a commit hook to run `./mach clang-format` before 
> every commit, but haven't gotten around to it...
>
> `mach clang-format` is awesome, please don't remove it.
>
> As others have pointed out, it doesn't always produce nice formatting, but 
> it's at least consistent, and enforces the major style guide recommendations.
>
> It's also liberating not having to waste brain power deciding how to format 
> code.
>
>
> cpearce.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Hiding 'new' statements - Good or Evil?

2017-11-27 Thread Jeff Gilbert
ranged-for issues are the same as those for doing manual iteration,
and your complaints about auto are caused by deficient code/design
review.

The further we deviate from standards, the harder it is for
contributors (and not just new ones) to do the right thing. The
default should be to move towards the standard, not cringe back from
it. Sure, prove it out. But we really don't need more moz-specific
constructs. We should choose our deviations carefully.
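
(For concreteness, the ranged-for hazard discussed below is the classic
mutate-while-iterating bug -- a sketch:)

  #include <vector>

  void Bad(std::vector<int>& aValues) {
    for (int value : aValues) {  // begin()/end() are captured up front
      if (value == 2) {
        aValues.push_back(4);    // may reallocate; the hidden iterators
      }                          // now dangle and the loop is UB
    }
  }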

On Mon, Nov 27, 2017 at 3:24 AM, smaug  wrote:
> On 11/27/2017 01:05 PM, Nicolas B. Pierron wrote:
>>
>> On 11/26/2017 12:45 AM, smaug wrote:
>>>
>>> On 11/24/2017 06:35 PM, Eric Rescorla wrote:

 On Thu, Nov 23, 2017 at 4:00 PM, smaug  wrote:

> On 11/23/2017 11:54 PM, Botond Ballo wrote:
>
>> I think it makes sense to hide a 'new' call in a Make* function when
>> you're writing an abstraction that handles allocation *and*
>> deallocation.
>>
>> So MakeUnique makes sense, because UniquePtr takes ownership of the
>> allocated object, and will deallocate it in the destructor. MakeRefPtr
>> would make sense for the same reason.
>>
> I almost agree with this, but all these Make* variants hide the
> information that they are allocating,
> and since allocation is slow, it is usually better to know when
> allocation
> happens.
> I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear
> about
> the functionality.
>

 This seems like a reasonable argument in isolation, but I think it's
 more
 important to mirror the standard C++ mechanisms and C++-14 already
 defines
 std::make_unique.
>>>
>>>
>>> Is it? Isn't it more important to write less error prone code and code
>>> where the reader
>>> sees the possible performance issues.
>>> Recent-ish C++ additions have brought quite a few bad concepts which we
>>> shouldn't use
>>> without considering whether they actually fit into our needs.
>>> Something being new and hip and cool doesn't mean it is actually a good
>>> thing.
>>
>>
>> One example is that the standard has a single std::function type which
>> always allocates, and cannot report failures because we disable exceptions.
>>
>> In this particular case I would recommend to have a stack-function wrapper
>> and a heap-function wrapper. By diverging from the standard we could avoid
>> bugs such as refusing stack references for lambdas being wrapped in a
>> heap-function (use-after-free/unwind), and adding extra checks so that a
>> stack-function wrapper can only be a field of stack-only classes.  C++
>> compilers have no way to understand the code deeply enough to provide
>> similar lifetime errors.
>>
>>> (This broken record keeps reminding about the security issues and memory
>>> leaks auto and ranged-for have caused.)
>>
>>
>> Do you have pointers? Is this basically when we have a non-standard
>> implementation of begin/end methods which are doing allocation?
>
>
> ranged-for causes issues when you modify the data structure while iterating.
> This used to be bad in nsTArray case at least, now we just crash safely.
>
>
> And auto hides the type, so reading the code and understanding lifetime
> management becomes harder.
> This has caused security sensitive crashes and leaks. (And there was some
> other type of error too recently, but it escapes my mind now)
>
>
>>
>>>

 As a post-script, given that we now can use C++-14, can we globally
 replace
 the MFBT clones of C++-14 mechanisms with the standard ones?

 -Ekr


>
>
> -Olli
>
>
>
>> But in cases where a library facility is not taking ownership of the
>> object, and thus the user will need to write an explicit 'delete', it
>> makes sense to require that the user also write an explicit 'new', for
>> symmetry.
>>
>> NotNull is a bit of a weird case because it can wrap either a raw
>> pointer or a smart pointer, so it doesn't clearly fit into either
>> category. Perhaps it would make sense for MakeNotNull to only be
>> usable with smart pointers?
>>
>> Cheers,
>> Botond
>>
>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>>>
>>
>>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Hiding 'new' statements - Good or Evil?

2017-11-23 Thread Jeff Gilbert
I agree that MakeNotNull doesn't sound like it allocates. Reads to me
as Make[Into]NotNull. Context will *usually* make this clear, but why
not NewNotNull? (Honestly I don't like MakeUnique to begin with)

NotNull::New(args...) is another option.

"My first goal was to avoid the unnecessary nullptr-check in the
NotNull constructor, since our new is infallible by default."
Even if we can't communicate to the optimizer that new is infallible
and induce it to omit the branch in this inlined code, it would be
extremely well predicted.
MOZ_UNLIKELY() it and call it good, IMO.
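
Something like this sketch, say (a hypothetical simplification, not the
actual mfbt NotNull):

  #include "mozilla/Assertions.h"
  #include "mozilla/Likely.h"

  template <typename T>
  class NotNull {
    T mPtr;  // a raw or smart pointer

   public:
    explicit NotNull(T aPtr) : mPtr(aPtr) {
      // With infallible new this branch is essentially never taken,
      // so a hint makes the check nearly free:
      if (MOZ_UNLIKELY(!mPtr)) {
        MOZ_CRASH("NotNull constructed from a null pointer");
      }
    }
  };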

On Thu, Nov 23, 2017 at 12:58 AM,   wrote:
> I'm not sure what the benefits are (the MakeUnique comments only really seem 
> to give aesthetic ones), but they do make it a bit harder when searching 
> through code to see where things are constructed.
>
> I suppose if we always stick to using Make* then (with regex) it wouldn't add 
> too much burden for searching.
>
> On Thursday, 23 November 2017 07:50:27 UTC, gsqu...@mozilla.com  wrote:
>> Should we allow hiding 'new' statements, or keep them as visible as possible?
>>
>>
>> Some context:
>> Recently in bug 1410252 I added a MakeNotNull<T*>(args...) function that 
>> does `NotNull<T*>(new T(args...))`, in the style of MakeUnique and others. 
>> It also works with RefPtr.
>>
>> My first goal was to avoid the unnecessary nullptr-check in the NotNull 
>> constructor, since our new is infallible by default.
>>
>> And I thought that hiding the naked new statement was a nice side benefit, 
>> as I was under the impression that it was A Good Thing in modern C++. 
>> (Though I think the main reason for that, was to prevent leaks when handling 
>> exceptions in multi-allocation expressions, so it doesn't apply to us here.)
>>
>> Now, a colleague remarked that this was making the memory allocation less 
>> obvious.
>> "What about MakeUnique?" I asked; "I'm used to them now" he retorted. :-P
>>
>>
>> So, what do you all think?
>> - Should I remove MakeNotNull?
>> - Or should we allow/encourage more MakeX functions instead of X(new...)? 
>> I'm thinking MakeRefPtr might be nice.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Hiding 'new' statements - Good or Evil?

2017-11-28 Thread Jeff Gilbert
const auto& foo = getFoo();
foo->bar(); // foo is always safe here

Use of auto instead of auto& should be less common. I only use it for:
Casts: `const auto foo = static_cast<T*>(bar);`
Or long (but obvious) types I need a value of, usually RefPtrs of long types.

Almost every other auto is `const auto&`, which has quite good and
predictable behavior: It names a temporary and extends its lifetime to
the current scope.


As for ranged-for:
For one, range-based iteration is equivalent to normal iterator
iteration, which is required for traversal of non-linear containers.
For two, modifying a container while doing normal index iteration
still causes buggy behavior, so it can't be a panacea. (particularly
if you hoist the size!)

  for (auto foo : myFoos) {
foo->bar();
  }
This is always safe if decltype(foo) is a strong ref. If foo->bar()
can mutate the lifetime of foo, of course you must take a strong
reference to foo. Nothing about auto or range-for changes this. If
foo->bar() can mutate myFoos, even index iteration is likely to have
bugs.
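
Spelled out with placeholder types (Foo and myFoos are illustrative;
assumes the usual RefPtr/nsTArray machinery):

  #include "mozilla/RefPtr.h"
  #include "nsISupportsImpl.h"
  #include "nsTArray.h"

  struct Foo {
    NS_INLINE_DECL_REFCOUNTING(Foo)
    void bar() {}

   private:
    ~Foo() = default;
  };

  void Iterate(nsTArray<RefPtr<Foo>>& myFoos) {
    for (const RefPtr<Foo>& foo : myFoos) {
      foo->bar();  // No extra ref taken: only safe if bar() can't
                   // mutate myFoos or drop foo's last reference.
    }
    for (RefPtr<Foo> foo : myFoos) {
      foo->bar();  // The copy holds a strong reference across the call.
    }
  }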


If you don't understand your lifetimes, you get bugs. Keeping to C++03
doesn't save you. Keeping to c++03 just means you only get the c++03
versions of these bugs. (or in our case, ns/moz-prefixed versions of
these bugs)

If you're reviewing code and can't figure out what the lifetimes are:
R-. I know well that sometimes this is the hardest part of review, but
it's a required and important part of the review, and sometimes the
most important precisely because we can't always use a
sufficiently-restricted set of constructs.

Review is sign-off about "I understand this and it's good (enough)".
If you have particular module-specific issues where
otherwise-acceptable constructs are particularly problematic, sure,
review appropriately. In my modules, these constructs are fine.
Perhaps this is because we have a more-constrained problem space than
say dom/html.

TL;DR:
If you have reason to ban a construct for your module: Cool and normal.
But the bar for banning a construct for all modules must be higher than that.
A mismatch for one module is not sufficient reason to ban it in general.

PS: If any reviewers need a crash course in any of these new concepts,
I'm more than happy to set up office hours at the All-Hands.

On Tue, Nov 28, 2017 at 1:35 PM, smaug  wrote:
> On 11/28/2017 06:33 AM, Boris Zbarsky wrote:
>>
>> On 11/27/17 7:45 PM, Eric Rescorla wrote:
>>>
>>> As for the lifetime question, can you elaborate on the scenario you are
>>> concerned about.
>>
>>
>> Olli may have a different concern, but I'm thinking something like this:
>>
>>   for (auto foo : myFoos) {
>> foo->bar();
>>   }
>>
>
> That was pretty much what I had in mind.
> Though, using auto without range-for, so just
> auto foo = getFoo();
> foo->bar(); // is this safe?
>
>
>
>
>> where bar() can run arbitrary script.  Is "foo" held alive across that
>> call?  Who knows; you have to go read the definition of the iterators on the
>> type of myFoos to find out.
>>
>> One possible answer is that the right solution for this type of issue is
>> the MOZ_CAN_RUN_SCRIPT static analysis annotation on bar(), which will make
>> this code not compile if the type of "foo" is a raw pointer But this
>> annotation is only added to a few functions in our codebase so far, and
>> we'll
>> see how well we manage at adding it to more.  We have a _lot_ of stuff in
>> our codebase that can run random script.  :(
>>
>> -Boris
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-13 Thread Jeff Gilbert
As long as we follow PEP 394, I'm super excited. (including on our
mozilla-build windows system, which counts as 'unix-like')

On Sun, Nov 12, 2017 at 9:12 AM, David Burns  wrote:
> I am not saying it should, but if we have a requirement for Python 3, we are
> also going to have a requirement for Python 2 to be available for local
> development.
>
> David
>
> On 11 November 2017 at 14:10, Andrew Halberstadt 
> wrote:
>
>> On Fri, Nov 10, 2017 at 9:44 PM David Burns  wrote:
>>
>>> My only concern about this is how local developer environments are going
>>> to be when it comes to testing. While I am sympathetic to moving to python
>>> 3 we need to make sure that all the test harnesses have been moved over and
>>> this is something that needs a bit of coordination. Luckily a lot of the
>>> mozbase stuff is already moving to python 3 support but that still means we
>>> need to have web servers and the actual test runners moved over too.
>>>
>>> David
>>>
>>
>> For libraries like mozbase, I think the plan will be to support both 2 and
>> 3 at
>> the same time. There are libraries (like 'six') that make this possible.
>> I'd bet
>> there are even parts of the build system that will still need to support
>> both at
>> the same time.
>>
>> With that in mind, I don't think python 3 support for test harnesses needs
>> to
>> block the build system.
>>
>> Andrew
>>
>> On 10 November 2017 at 23:27, Gregory Szorc  wrote:
>>>
 For reasons outlined at https://bugzilla.mozilla.org/
 show_bug.cgi?id=1388447#c7, we would like to make Python 3 a
 requirement to build Firefox sometime in the Firefox 59 development cycle.
 (Firefox 59 will be an ESR release.)

 The requirement will likely be Python 3.5+. Although I would love to
 make that 3.6 if possible so we can fully harness modern features and
 performance.

 I would love to hear feedback - positive or negative - from downstream
 packagers and users of various operating systems and distributions about
 this proposal.

 Please send comments to dev-bui...@lists.mozilla.org or leave them on
 bug 1388447.

 ___
 dev-builds mailing list
 dev-bui...@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-builds

 ___
>>> dev-builds mailing list
>>> dev-bui...@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-builds
>>>
>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Jeff Gilbert
Price matters, since every dollar we spend chasing ECC would be a
dollar we can't allocate towards perf improvements, hardware refresh
rate, or simply more machines for any build clusters we may want.

The paper linked above addresses massive compute clusters, which seems
to have limited implications for our use-cases.

Nearly every machine we do development on does not currently use ECC.
I don't see why that should change now. To me, ECC for desktop compute
workloads crosses the line into jumping at shadows, since "restart
your machine slightly more often than otherwise" is not onerous.

On Mon, Nov 6, 2017 at 9:19 AM, Gregory Szorc <gsz...@mozilla.com> wrote:
>
>
>> On Nov 6, 2017, at 05:19, Gabriele Svelto <gsve...@mozilla.com> wrote:
>>
>>> On 04/11/2017 01:10, Jeff Gilbert wrote:
>>> Clock speed and core count matter much more than ECC. I wouldn't chase
>>> ECC support for general dev machines.
>>
>> The Xeon-W SKUs I posted in the previous thread all had identical or
>> higher clock speeds than equivalent Core i9 SKUs and ECC support with
>> the sole exception of the i9-7980XE which has slightly higher (100MHz)
>> peak turbo clock than the Xeon W-2195.
>>
>> There is IMHO no performance-related reason to skimp on ECC support
>> especially for machines that will sport a significant amount of memory.
>>
>> Importance of ECC memory is IMHO underestimated mostly because it's not
>> common and thus users do not realize how frequently they may be hitting
>> memory errors. My main workstation is now 5 years
>> old and has accumulated 24 memory errors; that may not seem like much, but
>> if they happen at a bad time, or in a bad place, they can ruin your day or
>> permanently corrupt your data.
>>
>> As another example of ECC importance my laptop (obviously) doesn't have
>> ECC support and two years ago had a single bit that went bad in the
>> second DIMM. The issue manifested itself as internal compiler errors
>> while building Fennec. The first time I just pulled again from central
>> thinking it was a fluke, the second I updated the build dependencies
>> which I hadn't done in a while thinking that an old GCC might have been
>> the cause. It was not until the third day with a failure that I realized
>> what was happening. A 2-hours long memory test showed me the second DIMM
>> was bad so I removed it, ordered a new one and went on to check my
>> machine. I had to purge my compilation cache because garbage had
>> accumulated in there, run an hg verify on my repo as well as verifying
>> all the installed packages for errors. Since I didn't have access to my
>> main workstation at the time I had wasted 3 days chasing the issue and
>> my workflow was slowed down by a cold compilation cache and a gimped
>> machine (until I could replace the DIMM).
>>
>> This is not common, but it's not rare either and we now have hundreds of
>> developers within Mozilla so people are going to run into issues that
>> can be easily prevented by having ECC memory.
>>
>> That being said ECC memory also makes machines less susceptible to
>> Rowhammer-like attacks and makes them detectable while they are happening.
>>
>> For a more in-depth reading on the matter I suggest reading "Memory
>> Errors in Modern Systems - The Good, The Bad, and The Ugly" [1] in which
>> the authors analyze memory errors on live systems over two years and
>> argue that SEC-DED ECC (the type of protection you usually get on
>> workstations) is often insufficient and even chipkill ECC (now common on
>> most servers) is not enough to catch all errors happening during real
>> world use.
>>
>> Gabriele
>>
>> [1] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf
>>
>
> The Xeon-W’s are basically the i9’s (both Skylake-X) with support for ECC, 
> more vPRO, and AMT. The Xeon-W’s lack Turbo 3.0 (preferred core). However, 
> Turbo 2.0 apparently reaches the same MHz, so I don’t think it matters much. 
> There are some other differences with regards to PCIe lanes, chipset, etc.
>
> Another big difference is price. The Xeon’s cost a lot more.
>
> For building Firefox, the i9’s and Xeon-W are probably very similar (and is 
> something we should test). It likely comes down to whether you want to pay a 
> premium for ECC and other Xeon-W features. I’m not in a position to answer 
> that.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Jeff Gilbert
My understanding of current policy is that ECC is not required. (and
not even an option with MacBook Pros) Given the volume of development
that happens unhindered on our developers' many, many non-ECC
machines, I believe the burden of proof is on the pro-ECC
argument to show that it's likely to be a worthwhile investment for
our use-cases.

As for evidence for lack of ECC being a non-issue, I call to witness
the vast majority of Firefox development, most applicably that portion
done in the last ten years, and especially all MacOS development
excluding the very few Mac Pros we have.

If we've given developers ECC machines already when non-ECC was an
option, absent a positive request for ECC from the developer, I would
consider this to have been a minor mistake.

On Mon, Nov 6, 2017 at 3:03 PM, Gabriele Svelto <gsve...@mozilla.com> wrote:
> On 06/11/2017 22:44, Jeff Gilbert wrote:
>> Price matters, since every dollar we spend chasing ECC would be a
>> dollar we can't allocate towards perf improvements, hardware refresh
>> rate, or simply more machines for any build clusters we may want.
>
> And every day our developers or IT staff waste chasing apparently random
> issues is a waste of both money and time.
>
>> The paper linked above addresses massive compute clusters, which seems
>> to have limited implications for our use-cases.
>
> The clusters are 6000 and 8500 nodes respectively, quite small by
> today's standards. How many developers do we have? Hundreds for sure, it
> could be a thousand looking at our current headcount so we're in the
> same ballpark.
>
>> Nearly every machine we do development on does not currently use ECC.
>> I don't see why that should change now.
>
> Not true. The current Xeon E5-based ThinkStation P710 available from
> Service Now has ECC memory and so did the previous models in the last
> five years. Having a workstation available w/o ECC would actually be a
> step backwards.
>
>> To me, ECC for desktop compute
>> workloads crosses the line into jumping at shadows, since "restart
>> your machine slightly more often than otherwise" is not onerous.
> Do you have data to prove that this is not an issue?
>
>  Gabriele
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Gilbert
If you don't want to get into the weeds on ECC again, please do not
reinitiate discussion. I do not agree that "the additional cost of ECC
is very low compared to the cost of developer time over the two years
that they're expected to use it", but I will restrict my disagreement
to the forked thread that you created. Please repost there.

On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione  wrote:
> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>>
>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>> wrote:
>>>
>>> Hi All,
>>>
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>>
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>
>>
>> What's the advantage of having a "one size fits all" machine? I
>> imagine there's quite a range of uses and preferences for these
>> machines. e.g some people are going to be spending more time waiting
>> for a single core and so would prefer a smaller core count and higher
>> clock, other people want a machine that's as wide as possible. Some
>> people would value performance over correctness and so would likely
>> not want ECC. etc. I've heard a number of horror stories of people
>> ending up with hardware that's not well suited to their tasks just
>> because that was the only hardware on the list.
>
>
> High core count Xeons will divert power from idle cores to increase the
> clock speed of saturated cores during mostly single-threaded workloads.
>
> The advantage of a one-size-fits-all machine is that it means more of us
> have the same hardware configuration, which means fewer of us running into
> independent issues, more of us being able to share software configurations
> that work well, easier purchasing and stocking of upgrades and accessories,
> ... I own a personal high-end Xeon workstation, and if every developer at
> the company had to go through the same teething and configuration troubles
> that I did while breaking it in, we would not be in a good place.
>
> And I don't really want to get into the weeds on ECC again, but the
> performance of load-reduced ECC is quite good, and the additional cost of
> ECC is very low compared to the cost of developer time over the two years
> that they're expected to use it.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-03 Thread Jeff Gilbert
Clock speed and core count matter much more than ECC. I wouldn't chase
ECC support for general dev machines.

On Thu, Nov 2, 2017 at 6:46 PM, Gregory Szorc  wrote:
> On Thu, Nov 2, 2017 at 3:43 PM, Nico Grunbaum  wrote:
>
>> For rr I have an i7 desktop with a base clock of 4.0 Ghz, and for building
>> I use icecc to distribute the load (or rather I will be again when bug
>> 1412240[0] is closed).  The i9 series has lower base clocks (2.8 Ghz, and
>> 2.6Ghz for the top SKUs)[1], but high boost clocks of 4.2 Ghz.  If I were
>> to switch over to an i9 for everything, would I see a notable difference in
>> performance in rr?
>>
>
> Which i7? You should get better CPU efficiency with newer
> microarchitectures. The i9's we're talking about are based on Skylake-X
> which is based on Skylake which are the i7-6XXX models in the consumer
> lines. It isn't enough to compare MHz: you need to also consider
> microarchitectures, memory, and workload.
>
> https://arstechnica.com/gadgets/2017/09/intel-core-i9-7960x-review/2/ has
> some single-threaded benchmarks. The i7-7700K (Kaby Lake) seems to "win"
> for single-threaded performance. But the i9's aren't far behind. Not far
> enough behind to cancel out the benefits of the extra cores IMO.
>
> This is because the i9's are pretty aggressive about using turbo. More
> aggressive than the Xeons. As long as cooling can keep up, the top-end GHz
> is great and you aren't sacrificing that much perf to have more cores on
> die. You can counter by arguing that the consumer-grade i7's can yield more
> speedups via overclocking. But for enterprise uses, having this all built
> into the chip so it "just works" without voiding warranty is a nice trait :)
>
> FWIW, the choice to go with Xeons always bothered me because we had to make
> an explicit clock vs core trade-off. Building Firefox requires both many
> cores for compiling and fast cores for linking. Since the i9's turbo so
> well, we get the best of both worlds. And at a much lower price. Aside from
> the loss of ECC, it is a pretty easy decision to switch.
>
>
>> -Nico
>>
>> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure in
>> libavutil (missing atomic definitions), when building with clang and icecc
>>
>> [1] https://ark.intel.com/products/series/123588/Intel-Core-X-
>> series-Processors
>>
>> On 10/27/17 7:50 PM, Robert O'Callahan wrote:
>>
>>> BTW can someone forward this entire thread to their friends at AMD so AMD
>>> will fix their CPUs to run rr? They're tantalizingly close :-/.
>>>
>>> Rob
>>>
>>
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Proposal: WebIDL generated bindings classes renamed from dom::FooBinding to dom::binding::Foo

2018-06-07 Thread Jeff Gilbert
We have a name conflict under the current system when trying to use
these two classes from the webgpu sketch webidl:
- WebGPUBuffer
- WebGPUBufferBinding

I'm proposing renaming the generated bindings classes from
dom::FooBinding to dom::binding::Foo. This seems like a reasonable use
of namespaces, and will be immune from naming collisions coming from
webidl.

Also considered was dom::Foo_Binding. (webidl really shouldn't have
underscores, but technically could!) Talking with bz,
dom::binding::Foo sounds better to us, though.
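
To sketch the collision and the fix (illustrative only; this glosses
over whether the generated entities are classes or namespaces):

  namespace mozilla {
  namespace dom {

  // Today, `interface WebGPUBuffer` generates dom::WebGPUBufferBinding,
  // which is exactly the name the C++ class implementing
  // `interface WebGPUBufferBinding` wants. One name, two meanings.

  // Under the proposal the generated code moves aside:
  class WebGPUBufferBinding {};   // implements the WebIDL interface

  namespace binding {
  struct WebGPUBuffer {};         // generated bindings for WebGPUBuffer
  struct WebGPUBufferBinding {};  // no longer collides
  }  // namespace binding

  }  // namespace dom
  }  // namespace mozilla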
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Update on rustc/clang goodness

2018-05-29 Thread Jeff Gilbert
It would be sad to see us standardize on a clang monoculture.

On Tue, May 1, 2018 at 7:56 PM, Anthony Jones  wrote:
> You may already know that the Low-Level Tools team supports important tools
> and code infrastructure. Lately we’ve also been improving our rustc/clang
> (LLVM) story and I’d like to bring everyone up to date.
>
>
> There are a lot of important and interesting things going on:
>
>
> - Michael Woerister and Nathan Froyd recently (mid-March) enabled Rust
>   incremental compilation for Firefox developers
> - Michael is experimenting with cross-language inlining (rustc/clang) using
>   LTO
> - Preliminary results show compiling LLVM with clang and using LTO on rustc
>   improves stylo compilation time by around 15%
> - LTO requires more work to support Firefox
> - Nick Nethercote has started speeding up rustc and has already had a number
>   of wins
> - David Major has got Windows clang builds working with green tests
> - LTO is still buggy (but works well enough to run benchmarks) and PGO won’t
>   run on Windows
> - Win32: clang with LTO w/o PGO is marginally faster than MSVC with LTO/PGO
> - Win64: clang with LTO w/o PGO is ~5-10% slower than MSVC with LTO/PGO
> - Windows users can build locally with clang
> - Mike Hommey is tracking improvements and regressions in the Rust compiler
>   (performance regressions, crash reporting and more crash reporting)
> - Tom Tromey is continuing to work on first-class Rust support in GDB and LLDB
>
>
> Ultimately, I’d like to see us standardise on clang on all platforms because
> it makes Rust/C++ integration better as well as making things simpler for
> developers and build system maintainers. We’ll get more done if we make our
> own lives simpler.
>
>
> I have some specific requests:
>
>
> - Let me know if you have specific Firefox cases where Rust is slowing you
>   down
> - Cross-language inlining is coming - avoid duplication between Rust and C++
> - Start doing your developer builds with clang
>
> Anthony
>
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Update on rustc/clang goodness

2018-05-29 Thread Jeff Gilbert
I get that, but it reminds me of the reasons people give for "our
website works best in $browser".

There are also other less-obvious benefits where having multiple
backends can illuminate bugs and deviations from standards, as well as
having another set of warnings and static analysis passes. Once we go
monoculture, it's also really hard to go back, since non-portable
dependencies (both deliberate and accidental) start to pile up.

Now we could deliberately retain these advantages by keeping non-clang
compilers at at least tier-2, while leveraging clang where useful.
(looks like not win64 yet perf-wise, anyways) It would be nice to
establish our plan here.

On Tue, May 29, 2018 at 9:10 PM, Anthony Jones  wrote:
> On Wednesday, 30 May 2018 08:48:12 UTC+12, Jeff Gilbert  wrote:
>> It would be sad to see us standardize on a clang monoculture.
>
> It pays not to be sentimental about tools. We get better practicality and 
> productivity by focusing on clang. Using the same compiler across all 
> platforms means that we can do more with less effort.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-06-29 Thread Jeff Gilbert
I don't believe e-prefix adds sufficient value.
Mutable statics *must* always be s-prefixed. If this is not the case I
will gladly bring the codebase into compliance.

On Fri, Jun 29, 2018 at 11:36 AM, smaug  wrote:
> On 06/29/2018 05:58 PM, Boris Zbarsky wrote:
>>
>> On 6/29/18 10:30 AM, Nathan Froyd wrote:
>>>
>>> Given the language-required qualification for
>>> `enum class` and a more Rust-alike syntax, I would feel comfortable
>>> with saying CamelCase `enum class` is the way to go.
>>
>>
>> For what it's worth, I agree.  The point of the "e" prefix is to highlight
>> that you have an enumeration and add some poor-man's namespacing for a
>> potentially large number of common-looking names, and the language-required
>> qualification already handles both of those.
>>
>
> That works if we're consistently naming static variables and such using
> s-prefix etc, so that
> class Foo1
> {
>   static int sBar;
> };
> Foo1::sBar
>
> is clearly different to
> enum class Foo2
> {
>   Bar
> };
>
> Foo2::Bar
>
>
> (personally I'd still prefer using e-prefix always with enums, class or not.
> Foo1::sBar vs Foo2::eBar is easier to distinguish than Foo1::sBar vs
> Foo2::Bar)
>
> -Olli
>
>
>
>
>> -Boris
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-06-25 Thread Jeff Gilbert
If we can't agree, it shouldn't be in the style guide.

On Mon, Jun 25, 2018 at 2:17 PM, Peter Saint-Andre  wrote:
> And perhaps good reason for removing it from the style guide? ;-)
>
> On 6/25/18 3:08 PM, Emilio Cobos Álvarez wrote:
>> And Kris pointed out that we already had another huge thread on this:
>>
>>
>> https://groups.google.com/d/msg/mozilla.dev.platform/WAuySoTfq_w/-DggRotpBQAJ
>>
>>
>> Looks like there wasn't agreement on that one... But oh well, don't want
>> to repeat a lot of that discussion.
>>
>> I think the argument for consistency with the other systems language we
>> have in-tree, the fact that it's not predominant (at least for enum
>> classes) even though it is in the coding style, and that there wasn't
>> agreement in the previous thread are good reasons for not enforcing it,
>> but...
>>
>>  -- Emilio
>>
>> On 6/25/18 10:41 PM, Emilio Cobos Álvarez wrote:
>>> Our coding style states that we should use an `e` prefix for enum
>>> variants, that is:
>>>
>>>enum class Foo { eBar, eBaz };
>>>
>>> We're not really consistent about it: looking at layout/, we mostly
>>> use CamelCase, though we do have some prefixed enums. Looking at other
>>> modules, enum classes almost never use it either. DOM bindings also
>>> don't use that prefix.
>>>
>>> I think that with enum classes the usefulness of the prefix is less
>>> justified. Plus removing them would allow us to match the Rust coding
>>> style as well, which is nice IMO.
>>>
>>> Would anybody object to making the prefix non-mandatory, removing that
>>> line from the coding style doc? Maybe only making it non-mandatory for
>>> enum classes?
>>>
>>> Thanks,
>>>
>>>   -- Emilio
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Upcoming C++ standards meeting in Rapperswil, Switzerland

2018-06-20 Thread Jeff Gilbert
I agree with the sentiment in the write-up, that ideally beginner 2d
graphics needs are better handled by improving library/packaging
support, not by importing (and speccing??) an ancient library. For
instance in JS, people are more likely to pull in a library (of many)
to help with graphing, despite the availability of a standard 2D
drawing API.

On Wed, Jun 20, 2018 at 1:37 PM, Eric Shepherd (Sheppy)
 wrote:
> Thanks for sharing that overview!
>
> Although I can see why there's a lot of resistance to adding a graphics
> library to the C++ standard, it seems to me like a good idea. Even though,
> yes, there are going to be better and faster libraries out there, it's also
> true that anyone looking to maximize performance is going to be bypassing
> parts of the standard runtime anyway. Even text-based applications often
> bypass the standard I/O library to do their output more directly to the
> console (as I think back to my days of directly blasting characters into
> the text screen's video memory). Math routines are often replaced with
> custom libraries that do things faster, either using better algorithms,
> making use of processor features not directly supported by the standard
> library, or by making accuracy concessions when all you need are "good
> enough" values.
>
> A standard graphics library would serve to make it easier for new
> programmers to onboard, would be great for quick tools that graph the
> results of a problem being solved, or even many simple or low-intensity
> applications that simply don't need high-performance graphics.
>
> I do think that parts of the proposal could be broken off into separate
> components. The linear algebra stuff, along with geometry, could be split
> out into a separate proposal, since these have uses beyond 2D graphics.
> This would both help bring other capabilities to light sooner, but would
> lighten up the graphics proposal as well.
>
>
> On Wed, Jun 20, 2018 at 11:29 AM, Botond Ballo  wrote:
>
>> My blog post about what happened at this meeting is now live:
>>
>> https://botondballo.wordpress.com/2018/06/20/trip-report-c-
>> standards-meeting-in-rapperswil-june-2018/
>>
>> Cheers,
>> Botond
>>
>>
>> On Tue, May 22, 2018 at 6:35 PM, Botond Ballo  wrote:
>> > Hi everyone!
>> >
>> > The next meeting of the C++ Standards Committee will be June 4-9 in
>> > Rapperswil, Switzerland.
>> >
>> > This is going to be a pivotal meeting, with go / no-go decisions
>> > expected for several highly-anticipated C++20 features, including a
>> > subset of Modules; Coroutines; Ranges; and "natural syntax" Concepts /
>> > abbreviated function templates. A discussion of whether or not to
>> > continue efforts to standardize 2D Graphics is also scheduled (see
>> > arguments for [1] and against [2]). In addition, work will continue on
>> > various Technical Specifications that are in flight (including,
>> > notably, Reflection), and processing the large influx of new language
>> > and library feature proposals.
>> >
>> > If you're curious about the state of C++ standardization, I encourage
>> > you to check out my blog posts where I summarize each meeting in
>> > detail (most recent one here [3]), and the list of proposals being
>> > considered by the committee (new ones since the last meeting can be
>> > found here [4] and here [5]).
>> >
>> > I will be attending this meeting, hanging out in the Evolution Working
>> > Group (where new language features are discussed at the design level)
>> > as usual. As always, if there's anything you'd like me to find out for
>> > you at the meeting, or any feedback you'd like me to communicate,
>> > please let me know!
>> >
>> > Finally, I encourage you to reach out to me if you're thinking of
>> > submitting a proposal to the committee. I'm always happy to help with
>> > formulating and, if necessary, presenting a proposal.
>> >
>> > Cheers,
>> > Botond
>> >
>> >
>> > [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0988r0.pdf
>> > [2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1062r0.html
>> > [3] https://botondballo.wordpress.com/2018/03/28/trip-report-c-
>> standards-meeting-in-jacksonville-march-2018/
>> > [4] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/#
>> mailing2018-02
>> > [5] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/#
>> mailing2018-04
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
>
> --
>
> Eric Shepherd
> Senior Technical Writer
> Mozilla
> Blog: http://www.bitstampede.com/
> Twitter: http://twitter.com/sheppy
> Check my Availability 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

Re: Hiding 'new' statements - Good or Evil?

2018-01-04 Thread Jeff Gilbert
I would say it comes down to the module. I'm very comfortable in my
modules using auto, perhaps because our lifetime management is more
clear. To me, the unsafe examples are pretty strawman-y, since it's
very clear that there are at-risk lifetimes involved. (Returning a
bare pointer and relying on it staying valid is bad; const auto& takes
a reference, not a strong reference. If you want a new strong
reference, you obviously can't just use a reference)

`auto` was formalized in c++11, and it is now 2018. C++11 has been
required to compile Firefox since Firefox 25 in mid 2013. The rules
for auto are pretty easy to understand, too. (strips the c/v/r
decorations from the returning type, so that `const int& bar(); auto
foo = bar();` gives `foo` type `int`) In my mind, pushing back on auto
here is effectively a statement that we never expect our reviewing
ability to expand to include any new constructs, because we might
potentially maybe use those new constructs wrong. Yes, we should
always carefully consider new things. No, we shouldn't use every
new-fangled bit of the language. But honestly if you're writing c++
without auto today, it's not modern c++.

If you think auto isn't right for your module, don't accept it. That
much is agreeable, I think. To my knowledge, we've been using it for
years and so far I've only run into one at-all-related issue with it,
which ended up being a miscompilation on gcc<7 that affects all
references (including non-auto) to member scalars.
(https://bugzilla.mozilla.org/show_bug.cgi?id=1329044)

On Thu, Jan 4, 2018 at 11:31 AM, Randell Jesup  wrote:
> [SNIP]
>>> If foo->bar() can mutate the lifetime of foo, of course you must take a 
>>> strong
>>> reference to foo. Nothing about auto or range-for changes this.
>>
>>What auto does is make it really hard to tell whether a strong reference is
>>being taken.
>>
>>> If you don't understand your lifetimes, you get bugs.
>>
>>Fully agreed.  The discussion is about whether auto helps or hinders that
>>understanding.  The answer is that it depends on the surrounding code, from
>>where I sit...
>>
>>-Boris
>
> So, where do we go from here?
>
> It's clear that auto is not as safe as some/many believe it to be; it
> can as Boris (and smaug and Xidorn) say hide lifetime issues and lead to
> non-obvious UAFs and the like.  Some uses are certainly safe, but it
> isn't always obvious looking at a patch (per above), requiring more code
> investigation by the reviewer.  If the resultant type isn't obvious,
> then using it isn't helping comprehension.  When reading the code in the
> future, if the reader has to do non-trivial investigation just to know
> what's going on, it's hurting our work.
>
> If the win is to avoid typing I'd say it's not worth the risk.  Or
> allow it, but only with commentary as to why it's safe, or with rules
> about what sort of types it's allowed with and anything other than those
> requires justification in comments/etc.
>
> --
> Randell Jesup, Mozilla Corp
> remove "news" for personal email
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jeff Gilbert
It's way cheaper to get build clusters rolling than to get beefy
hardware for every desk.
Distributed compilation or other direct build optimizations also allow
continued use of laptops for most devs, which definitely has value.

On Wed, Jan 17, 2018 at 11:22 AM, Jean-Yves Avenard
 wrote:
>
>
>> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
>>
>> Something simple with the jobserver logic might work here, but I think we
>> want to complete the long-term project of getting a complete dependency
>> graph available before looking at that kind of optimization.
>
> Just get every person needing to work on mac an iMac Pro, and those on 
> Windows/Linux a P710 or better and off we go.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: overly strict eslint rules

2017-12-27 Thread Jeff Gilbert
It's not just eslint either. We warn in static analysis now when using
`!std::vector::size()` instead of `empty()`. Such over-prescriptive
linting is unnecessary and unproductive.
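
For reference, the two spellings are equivalent for std::vector; the
analyzer just prescribes the second:

  #include <vector>

  bool IsEmptyFlagged(const std::vector<int>& v) {
    return !v.size();  // what the static analysis now warns about
  }

  bool IsEmptyPreferred(const std::vector<int>& v) {
    return v.empty();  // the prescribed spelling
  }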

On Sun, Dec 24, 2017 at 2:07 PM, Masatoshi Kimura  wrote:
>
> On 2017/12/25 6:31, Jonathan Kingston wrote:
>> It has worked for me for quite some time just running ./mach lint
>> filename.js after bootstrap.
>
> I got the following error when I tried it just now:
>
> ---
> $ mach lint browser/base/content/aboutDialog.js
> eslint-plugin-html v2.0.3 should be v4.0.0.
> ESLint is an old version, clobbering node_modules directory
> Clobbering node_modules...
> Installing eslint for mach using
> "d:\mozilla-build\node-v8.1.4-win-x64\npm.cmd install --loglevel=error"...
> npm ERR! code ENOLOCAL
> npm ERR! Could not install from
> "tools\lint\eslint\eslint-plugin-mozilla" as it does not contain a
> package.json file.
>
> npm ERR! A complete log of this run can be found in:
> npm ERR!
> C:\Users\\AppData\Roaming\npm-cache\_logs\2017-12-24T21_55_36_055Z-debug.log
>
> Error installing eslint, aborting.
> Could not find eslint!  We looked at the --binary option, at the ESLINT
> environment variable, and then at your local node_modules path. Please
> Install
> eslint and needed plugins with:
>
> mach eslint --setup
>
> and try again.
> A failure occured in the eslint linter.
> ✖ 1 problem (0 errors, 0 warnings, 1 failure)
> ---
>
> It is a really frustrating task to run eslint (and keep a working
> environment) on Windows.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::Hash{Map,Set}

2018-08-15 Thread Jeff Gilbert
Awesome!

What are the pros/cons to mozilla::Hash{Map,Set} vs
std::unordered_{map,set}? I'm assuming perf is the main draw? Do we
have data on this too?
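
For anyone who hasn't used it, a rough sketch of typical usage, going by
the js::HashMap conventions (put/lookup/has) the announcement mentions;
the authoritative signatures are in mfbt/HashTable.h, and depending on
tree vintage an explicit init() call may also be required:

  #include <cstdint>
  #include <cstdio>
  #include "mozilla/HashTable.h"

  using IntMap = mozilla::HashMap<uint32_t, uint32_t>;

  bool Example() {
    IntMap map;
    if (!map.put(1, 100)) {  // put() is fallible (OOM)
      return false;
    }
    if (IntMap::Ptr p = map.lookup(1)) {
      printf("1 -> %u\n", p->value());
    }
    return map.has(1);
  }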

On Mon, Aug 13, 2018 at 10:44 PM, Nicholas Nethercote
 wrote:
> Hi,
>
> I recently moved Spidermonkey's js::Hash{Map,Set} classes from
> js/public/HashTable.h into mfbt/HashTable.h and renamed them as
> mozilla::Hash{Map,Set}. They can now be used throughout Gecko.
>
> (js/public/HashTable.h still exists in order to provide renamings of
> mozilla::Hash{Map,Set} as js::Hash{Map,Set}.)
>
> Why might you use mozilla::Hash{Map,Set} instead of PLDHashTable (or
> nsTHashtable and other descendants)?
>
> - Better API: these types provide proper HashMap and HashSet instances, and
> (in my opinion) are easier to use.
>
> - Speed: the microbenchmark in xpcom/rust/gtest/bench-collections/Bench.cpp
> shows that mozilla::HashSet is 2x or more faster than PLDHashTable. E.g.
> compare "MozHash" against "PLDHash" in this graph:
>
> https://treeherder.mozilla.org/perf.html#/graphs?timerange=604800=mozilla-central,1730159,1,6=mozilla-central,1730162,1,6=mozilla-central,1730164,1,6=mozilla-central,1732092,1,6=mozilla-central,1730163,1,6=mozilla-central,1730160,1,6
>
> Bug 1477627 converted a hot hash table from PLDHashTable to
> mozilla::HashSet and appears to have sped up cycle collection in some cases
> by 7%. If you know of another PLDHashTable that is hot, it might be worth
> converting it to mozilla::HashTable.
>
> Both mozilla::Hash{Map,Set} and PLDHashTable use the same double-hashing
> algorithm; the speed difference is due to mozilla::HashSet's extensive use
> of templating and inlining. The downside of this is that mozilla::HashSet
> will increase binary size more than PLDHashTable.
>
> There are overview comments at the top of mfbt/HashTable.h, and the classes
> themselves have more detailed comments about every method.
>
> Nick
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Workaround for building with Visual Studio 15.7+

2018-08-16 Thread Jeff Gilbert
FWIW, I found that building with clang-cl works fine out-of-the-box
with 15.7.x, and the Installer tells me that I don't have the 15.6
toolset installed. YMMV, but I've found that clang-cl builds Just Work
with 15.7.

On Thu, Aug 16, 2018 at 5:23 AM, Gabriele Svelto  wrote:
>  Hello all,
> being among those unfortunate enough to have updated Visual Studio
> before realizing that the most recent version cannot be used to build
> Firefox, I was faced with the prospect of reinstalling the whole
> monstrosity - which takes forever - or finding a workaround. As it turns
> out I found a clunky yet simple one:
>
> - Launch the Visual Studio Installer, then click on the "modify" button
>
> - Go into the "Individual components" and select the "VC++ 2017 version
> 15.6 v14.13 toolset" component
>
> - Launch the installation
>
> - Modify build/moz.configure/toolchain.configure by manually setting the
> tools_version string in [1] to the value '14.13.26128'
>
> - You're done! Both clang-cl and MSVC builds should now pick up the old
> tooling and work properly (provided you have the right Windows SDK).
>
> I couldn't find a way to override this via mozconfig options or
> environment variables but if there is a way to do this without modifying
> the sources I'd like to hear it.
>
>  Gabriele
>
> [1]
> https://searchfox.org/mozilla-central/rev/5dbfd833bbb114afe758db4d4bdbc5b13bcc33ef/build/moz.configure/toolchain.configure#584
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Avoid Visual Studio 2017 15.7.0

2018-08-06 Thread Jeff Gilbert
Consider instead building with clang-cl:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Building_Firefox_on_Windows_with_clang-cl

If you build with clang-cl, you can keep the newest
(gecko-incompatible) version of MSVC installed, which is particularly
useful if you build other modern codebases.

On Fri, May 11, 2018 at 3:01 PM, Matthew N.  wrote:
> On 2018-05-08 6:40 a.m., Ryan VanderMeulen wrote:
>>
>> Yesterday, Microsoft released Visual Studio 2017 15.7.0. Unfortunately, it
>> is currently not usable for building Firefox due to bug 1458247 (internal
>> compiler errors in WebRTC code). The bug was already reported and
>> confirmed
>> upstream during the 15.7 preview cycle, but unfortunately the final
>> release
>> still shipped with the bug present.
>>
>> At this point, there are no workarounds available for this issue, so avoid
>> the update if you want to be able to continue building Firefox.
>
>
> If you need to update to 15.6 but hadn't yet, you can download old versions
> at
> https://docs.microsoft.com/en-us/visualstudio/productinfo/installing-an-earlier-release-of-vs2017
>
> - MattN
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Jeff Gilbert
Using a classic read/write exclusive lock, we would only ever contend
on read+write or write+write, which are /rare/.

It's really, really nice when we can have dead-simple threadsafe APIs,
instead of requiring people to jump through hoops or roll their own
(fragile) dispatch code. IIRC most new APIs added to the web are
supposed to be available in Workers, so the need for reading prefs
off-main-thread is only set to grow.

I don't see how this can mutate into a foot-gun in ways that aren't
already the case today without off-main-thread access.

Anyway, I appreciate the work that's been done and is ongoing here. As
you burn down the pref accesses in start-up, please consider
unblocking this feature request. (Personally I'd just eat the 400us in
exchange for this simplifying architectural win)
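
For concreteness, a minimal sketch of the sort of locking being proposed
here, using the RWLock we already have in xpcom/threads/RWLock.h (std
containers stand in for the real pref internals; and per Kris's point
below, lookups that move entries, e.g. for Robin Hood hashing, would
still need the write lock):

  #include <string>
  #include <unordered_map>
  #include "mozilla/RWLock.h"

  static mozilla::RWLock gPrefsLock("PrefsLock");
  static std::unordered_map<std::string, int> gPrefs;

  int GetIntPref(const std::string& aName) {
    mozilla::AutoReadLock lock(gPrefsLock);  // read+read never contends
    auto it = gPrefs.find(aName);
    return it != gPrefs.end() ? it->second : 0;
  }

  void SetIntPref(const std::string& aName, int aValue) {
    mozilla::AutoWriteLock lock(gPrefsLock);  // writes are rare
    gPrefs[aName] = aValue;
  }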

On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione  wrote:
> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>
>> We should totally be able to afford the very low cost of a
>> rarely-contended lock. What's going on that causes uncached pref reads
>> to show up so hot in profiles? Do we have a list of problematic pref
>> keys?
>
>
> So, at the moment, we read about 10,000 preferences at startup in debug
> builds. That number is probably slightly lower in non-debug builds, but we
> don't collect stats there. We're working on reducing that number (which is
> why we collect statistics in the first place), but for now, it's still quite
> high.
>
>
> As for the cost of locks... On my machine, in a tight loop, the cost of
> entering and exiting MutexAutoLock is about 37ns. This is pretty close to
> ideal circumstances, on a single core of a very fast CPU, with very fast
> RAM, everything cached, and no contention. If we could extrapolate that to
> normal usage, it would be about a third of a ms of additional overhead for
> startup. I've fought hard enough for 1ms startup time improvements, but
> *shrug*, if it were that simple, it might be acceptable.
>
> But I have no reason to think the lock would be rarely contended. We read
> preferences *a lot*, and if we allowed access from background threads, I
> have no doubt that we would start reading them a lot from background threads
> in addition to reading them a lot from the main thread.
>
> And that would mean, in addition to lock contention, cache contention and
> potentially even NUMA issues. Those last two apply to atomic var caches too,
> but at least they generally apply only to the specific var caches being
> accessed off-thread, rather than pref look-ups in general.
>
>
> Maybe we could get away with it at first, as long as off-thread usage
> remains low. But long term, I think it would be a performance foot-gun. And,
> paradoxically, the less foot-gunny it is, the less useful it probably is,
> too. If we're only using it off-thread in a few places, and don't have to
> worry about contention, why are we bothering with locking and off-thread
> access in the first place?
>
>
>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
>> wrote:
>>>
>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>>
>>>>
>>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>>
>>>>>
>>>>> tl;dr: A major change to the preference service architecture has just
>>>>> landed, so please be on the lookout for regressions.
>>>>>
>>>>> We've been working for the last few weeks on rearchitecting the
>>>>> preference service to work better in our current and future
>>>>> multi-process
>>>>> configurations, and those changes have just landed in bug 1471025.
>>>>
>>>>
>>>>
>>>> Looks like a great step forward!
>>>>
>>>> While we're thinking about the prefs service, is there any possibility
>>>> we
>>>> could enable off-main-thread access to preferences?
>>>
>>>
>>>
>>> I think the chances of that are pretty close to 0, but I'll defer to
>>> Nick.
>>>
>>> We definitely can't afford the locking overhead—preference look-ups
>>> already
>>> show up in profiles without it. And even the current limited exception
>>> that
>>> we grant Stylo while it has the main thread blocked causes problems (bug
>>> 1474789), since it makes it impossible to update statistics for those
>>> reads,
>>> or switch to Robin Hood hashing (which would make our hash tables much
>>> smaller and more efficient, but requires read operations to be able to
>>> move
>>> entries).
>>>
>>>> I am aware that in simple

Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Jeff Gilbert
I have filed bug 1477436: "Preferences::Get* is not threadsafe/is main
thread only"
https://bugzilla.mozilla.org/show_bug.cgi?id=1477436

On Fri, Jul 20, 2018 at 11:36 AM, Eric Rahm  wrote:
> We *could* special case prefs with an appropriate data structure that works
> in a thread-safe manner; as far as RWLocks go, we do have one in tree [1].
> This has gone off the rails a bit from Kris' original announcement, which
> I'll reiterate: Watch out for prefs related bustage.
>
> Jeff, would you mind filing a bug for further discussion of off-main-thread
> access as a future improvement?
>
> [1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h
>
> On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione 
> wrote:
>>
>> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>>>
>>> Using a classic read/write exclusive lock, we would only ever contend
>>> on read+write or write+write, which are /rare/.
>>
>>
>> That might be true if we gave up on the idea of switching to Robin Hood
>> hashing. But if we don't, then every lookup is a potential write, which
>> means every lookup required a write lock.
>>
>> We also don't really have any good APIs for rwlocks at the moment. Which,
>> admittedly, is a solvable problem.
>>
>>
>>> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
>>> wrote:
>>>>
>>>> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>>>>
>>>>>
>>>>> We should totally be able to afford the very low cost of a
>>>>> rarely-contended lock. What's going on that causes uncached pref reads
>>>>> to show up so hot in profiles? Do we have a list of problematic pref
>>>>> keys?
>>>>
>>>>
>>>>
>>>> So, at the moment, we read about 10,000 preferences at startup in debug
>>>> builds. That number is probably slightly lower in non-debug builds, but
>>>> we
>>>> don't collect stats there. We're working on reducing that number (which
>>>> is
>>>> why we collect statistics in the first place), but for now, it's still
>>>> quite
>>>> high.
>>>>
>>>>
>>>> As for the cost of locks... On my machine, in a tight loop, the cost of
>>>> entering and exiting MutexAutoLock is about 37ns. This is pretty close
>>>> to
>>>> ideal circumstances, on a single core of a very fast CPU, with very fast
>>>> RAM, everything cached, and no contention. If we could extrapolate that
>>>> to
>>>> normal usage, it would be about a third of a ms of additional overhead
>>>> for
>>>> startup. I've fought hard enough for 1ms startup time improvements, but
>>>> *shrug*, if it were that simple, it might be acceptable.
>>>>
>>>> But I have no reason to think the lock would be rarely contended. We
>>>> read
>>>> preferences *a lot*, and if we allowed access from background threads, I
>>>> have no doubt that we would start reading them a lot from background
>>>> threads
>>>> in addition to reading them a lot from the main thread.
>>>>
>>>> And that would mean, in addition to lock contention, cache contention
>>>> and
>>>> potentially even NUMA issues. Those last two apply to atomic var caches
>>>> too,
>>>> but at least they generally apply only to the specific var caches being
>>>> accessed off-thread, rather than pref look-ups in general.
>>>>
>>>>
>>>> Maybe we could get away with it at first, as long as off-thread usage
>>>> remains low. But long term, I think it would be a performance foot-gun.
>>>> And,
>>>> paradoxically, the less foot-gunny it is, the less useful it probably
>>>> is,
>>>> too. If we're only using it off-thread in a few places, and don't have
>>>> to
>>>> worry about contention, why are we bothering with locking and off-thread
>>>> access in the first place?
>>>>
>>>>
>>>>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>>>>>
>>>>>>>>

Re: PSA: Major preference service architecture changes inbound

2018-07-17 Thread Jeff Gilbert
We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?

On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione  wrote:
> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>
>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>
>>> tl;dr: A major change to the architecture of the preference service has just
>>> landed, so please be on the lookout for regressions.
>>>
>>> We've been working for the last few weeks on rearchitecting the
>>> preference service to work better in our current and future multi-process
>>> configurations, and those changes have just landed in bug 1471025.
>>
>>
>> Looks like a great step forward!
>>
>> While we're thinking about the prefs service, is there any possibility we
>> could enable off-main-thread access to preferences?
>
>
> I think the chances of that are pretty close to 0, but I'll defer to Nick.
>
> We definitely can't afford the locking overhead—preference look-ups already
> show up in profiles without it. And even the current limited exception that
> we grant Stylo while it has the main thread blocked causes problems (bug
> 1474789), since it makes it impossible to update statistics for those reads,
> or switch to Robin Hood hashing (which would make our hash tables much
> smaller and more efficient, but requires read operations to be able to move
> entries).
>
>> I am aware that in simple cases, this can be achieved via the
>> StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
>> from other threads. But this doesn't help in my use case, where I need
>> another thread to be able to query an extensible set of pref names that are
>> not fully known at compile time.
>>
>> Currently, it looks like to do this, I'll have to iterate over the
>> relevant prefs branch(es) ahead of time (on the main thread) and copy all
>> the entries to some other place that is then available to my worker threads.
>> For my use case, at least, the other threads only need read access;
>> modifying prefs could still be limited to the main thread.
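
The copy-ahead-of-time approach Jonathan describes can keep the reader
side free of any global table lock by publishing an immutable snapshot.
A generic sketch of the pattern (the main-thread walk of the real pref
branch is elided; C++20's std::atomic<std::shared_ptr> is the more
modern spelling of the atomic_load/atomic_store calls used here):

  #include <memory>
  #include <string>
  #include <unordered_map>
  #include <utility>

  using PrefSnapshot = std::unordered_map<std::string, std::string>;

  // Published, immutable snapshot; swapped atomically on update.
  std::shared_ptr<const PrefSnapshot> gSnapshot;

  // Main thread: copy the relevant pref values and publish them.
  void PublishSnapshot(PrefSnapshot values) {
    std::shared_ptr<const PrefSnapshot> snap =
        std::make_shared<PrefSnapshot>(std::move(values));
    std::atomic_store(&gSnapshot, std::move(snap));
  }

  // Worker threads: grab the current snapshot once; all lookups then
  // touch only the immutable map, with no shared lock held.
  std::string GetPref(const std::string& aName, std::string aDefault) {
    std::shared_ptr<const PrefSnapshot> snap = std::atomic_load(&gSnapshot);
    if (!snap) {
      return aDefault;
    }
    auto it = snap->find(aName);
    return it != snap->end() ? it->second : aDefault;
  }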
>
>
> That's probably your best option, yeah. Although I will say that those kinds
> of extensible preference sets aren't great for performance or memory usage,
> so switching to some other model might be better.
>
>> Possible? Or would the overhead of locking be too crippling?
>
>
> The latter, I'm afraid.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposal for an embedding library

2018-07-18 Thread Jeff Gilbert
It feels like the committee is burnt out on trying to solve the
general library problem, but contemplating something massively complex
like this instead doesn't follow; it's an answer to the wrong
question.

Make it easier to integrate libraries and we wouldn't see kludge
proposals like this.

On Wed, Jul 18, 2018 at 9:45 AM, Botond Ballo  wrote:
> Hi everyone,
>
> With the proposal for a standard 2D graphics library now on ice [1],
> members of the C++ standards committee have been investigating
> alternative ways of giving C++ programmers a standard way to write
> graphical and interactive applications, in a way that leverages
> existing standards and imposes a lower workload on the committee.
>
> A recent proposal along these lines is for a standard embedding
> facility called "web_view", inspired by existing embedding APIs like
> Android's WebView:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
>
> As we have some experience in the embedding space here at Mozilla, I
> was wondering if anyone had feedback on this embedding library
> proposal. This is an early-stage proposal, so high-level feedback on
> the design and overall approach is likely to be welcome.
>
> Thanks!
> Botond
>
> [1] 
> https://botondballo.wordpress.com/2018/06/20/trip-report-c-standards-meeting-in-rapperswil-june-2018/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: minimum version of clang will become 3.9 soon.

2018-09-10 Thread Jeff Gilbert
I'm always happy for these updates! Thanks!

On Sat, Sep 8, 2018 at 4:19 PM,   wrote:
> This is already practically the case as there are unfixed compilation errors 
> with clang 3.8 (https://bugzilla.mozilla.org/show_bug.cgi?id=1487810).
>
> Bug https://bugzilla.mozilla.org/show_bug.cgi?id=1482196 will fix the build 
> tools so they will require at least clang 3.9 and fail early if not.
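
A fail-early gate of this sort is straightforward to express in the
preprocessor; a minimal sketch (the real check lives in the build
system, not in a header like this):

  // Refuse to build with a compiler older than clang 3.9.
  #if defined(__clang__) && \
      (__clang_major__ < 3 || (__clang_major__ == 3 && __clang_minor__ < 9))
  #error "A clang of version 3.9 or newer is required."
  #endif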
>
> I plan to land this pretty soon barring objections.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Visual Studio 2017 15.6 now required to build 61+

2018-03-13 Thread Jeff Gilbert
The patches have landed. Thanks!

Are we ready to update this page?:
https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code

On Mon, Mar 12, 2018 at 5:29 PM, Ryan VanderMeulen
 wrote:
> While I know I'm tempting fate by sending this out while the patches are
> still on autoland, I wanted to start giving people a heads-up now that bug
> 1424281 has been pushed, which will make Visual Studio 2017 15.6 (Update 6)
> the minimum version required to build Gecko 61+ once it merges to m-c.
>
> This change brings a number of improvements over version 15.4, which is
> what we've been using in automation since Gecko 58, including performance
> wins and better C++17 support. Release notes for versions 15.5 and 15.6 are
> linked below with more details:
> https://docs.microsoft.com/en-us/visualstudio/releasenotes/vs2017-relnotes-v15.5#a-idlibimprov-a-visual-c-improvements
> https://docs.microsoft.com/en-us/visualstudio/releasenotes/vs2017-relnotes#CPlusPlus
> https://blogs.msdn.microsoft.com/vcblog/2017/12/19/c17-progress-in-vs-2017-15-5-and-15-6/
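
For a generic taste of the kind of C++17 code the newer toolchain
handles (which specific features landed in 15.5 versus 15.6 is covered
in the linked notes, so treat this as illustrative only):

  #include <map>
  #include <optional>
  #include <string>

  // Hypothetical helper: parse a port number, if any.
  std::optional<int> ParsePort(const std::string& s) {
    try { return std::stoi(s); } catch (...) { return std::nullopt; }
  }

  void Demo(const std::map<std::string, int>& m) {
    // C++17 structured bindings over map entries.
    for (const auto& [name, value] : m) {
      // C++17 if-statement with initializer.
      if (auto port = ParsePort(name); port && *port == value) {
        // ...
      }
    }
  }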
>
> If you're currently using Visual Studio 2015, you can download the 2017
> installer from https://www.visualstudio.com/vs/community/. If you already
> have 2017 installed, you should only need to launch the Visual Studio
> Installer already on your system and follow the update prompts. Note that
> the Windows SDK minimum version was also bumped to version 15063 to match
> what we've been using in automation. It is also installable via the Visual
> Studio Installer if needed.
>
> Enjoy!
>
> -Ryan
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

