Re: Graceful Platform Degradation
Cameron Kaiser wrote:
> For TenFourFox, I've often toyed with implementing switches for box-shadow, blur, etc., so that people on the very low end of the spec (we still support G3 Macintoshes) can turn these rather expensive features off. I'd rather do that in a web-standard way than a one-off local change, though.

Surely any user can just add this to their userContent.css:

    *|* { box-shadow: none !important; }

--
Warning: May contain traces of nuts.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
MOZ_ASSUME_UNREACHABLE is being misused
Hi,

Despite a helpful, scary comment above its definition in mfbt/Assertions.h, MOZ_ASSUME_UNREACHABLE is being misused. Not pointing fingers at anything specific here, but see http://dxr.mozilla.org/mozilla-central/search?q=MOZ_ASSUME_UNREACHABLE&case=true.

The only reason one might want an unreachability marker instead of simply doing nothing is as an optimization --- a rather arcane, dangerous, I-know-what-I-am-doing kind of optimization.

How can we help people not misuse it? Should we rename it to something more explicit about what it is doing, such as perhaps MOZ_UNREACHABLE_UNDEFINED_BEHAVIOR?

Should we give typical code a macro that does what it wants and sounds like what it wants? Really, what typical code wants is a no-operation instead of undefined behavior; now, that is exactly the same as MOZ_ASSERT(false, "error"). Maybe this syntax is unnecessarily annoying, and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH but only affecting DEBUG builds? What would be a good name for it? Is it worth keeping a close analogy with the unreachable-marker macro to steer people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even just MOZ_UNREACHABLE? So that people couldn't miss it when they look for UNREACHABLE macros?

Benoit
Re: MOZ_ASSUME_UNREACHABLE is being misused
On 3/28/14, 12:25 PM, Benoit Jacob wrote:
> Should we give typical code a macro that does what it wants and sounds like what it wants? Really, what typical code wants is a no-operation instead of undefined behavior; now, that is exactly the same as MOZ_ASSERT(false, "error"). Maybe this syntax is unnecessarily annoying, and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH but only affecting DEBUG builds? What would be a good name for it? Is it worth keeping a close analogy with the unreachable-marker macro to steer people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even just MOZ_UNREACHABLE? So that people couldn't miss it when they look for UNREACHABLE macros?

How about replacing MOZ_ASSUME_UNREACHABLE with two new macros, like:

    #define MOZ_ASSERT_UNREACHABLE() \
      MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE")

    #define MOZ_CRASH_UNREACHABLE() \
      do { \
        MOZ_ASSUME_UNREACHABLE_MARKER(); \
        MOZ_CRASH("MOZ_CRASH_UNREACHABLE"); \
      } while (0)

chris
Warning about mutating the [[Prototype]] of an object ?
Recently, after a refresh of code from comm-central, I noticed that running the |make mozmill| TB test suite using a full DEBUG BUILD of TB produced the following warning lines:

    System JS : WARNING file:///REF-OBJ-DIR/objdir-tb3/mozilla/dist/bin/components/steelApplication.js:783 - mutating the [[Prototype]] of an object will cause your code to run very slowly; instead create the object with the correct initial [[Prototype]] value using Object.create

I get the warning lines as often as the TB binary is invoked (35 times). I think it is an enhancement of the JS system to print out a warning like the above.

Looking at the installed steelApplication.js under the object directory, I found the offending line (the __proto__ assignment below):

    //@line 63 /REF-COMM-CENTRAL/comm-central/mail/steel/steelApplication.js
    Application.prototype.__proto__ = extApplication.prototype;
    const NSGetFactory = XPCOMUtils.generateNSGetFactory([Application]);

I suspect that the code may not run very often (once each TB invocation). But is ignoring this issue safe and OK, or should we rewrite the code (and if so, in what way)?

BTW, it is nice that the JS interpreter (?) seems to have become developer-friendly and produces something like the following, which helps very much to trace the execution of the program at the time of an exception:

    * Call to xpconnect wrapped JSObject produced this error: *
    [Exception... [JavaScript Error: null has no properties {file: chrome://messenger/content/folderPane.js line: 1796}]'[JavaScript Error: null has no properties {file: chrome://messenger/content/folderPane.js line: 1796}]' when calling method: [nsIFolderListener::OnItemRemoved] nsresult: 0x80570021 (NS_ERROR_XPC_JAVASCRIPT_ERROR_WITH_DETAILS) location: JS frame :: resource://mozmill/modules/frame.js - file:///REF-COMM-CENTRAL/comm-central/mail/test/mozmill/folder-pane/test-folder-names-in-recent-mode.js :: test_folder_names_in_recent_view_mode :: line 67 data: yes]

Considering that developers probably spend more than 2/3 or 3/4 of their time debugging/testing code [I am not kidding], such developer-friendly error output goes a long way toward helping the developer community. Very nice work!

I think such improvement for testing/debugging is much more important than, say, making the JS interpreter 5% faster or something. But people's opinions probably vary. It all boils down to where to place what priority. I tend to value developers' time very much. Making sure to use developers' time efficiently is the only way to sustain a long-term development cycle.

TIA
Re: Warning about mutating the [[Prototype]] of an object ?
The code should be fixed. It's my understanding that the existing idiom used throughout the Thunderbird tree is still okay to do, since the prototype chain is created at object initialization time and so there's no actual mutation of the chain:

    function Application() {
    }
    Application.prototype = {
      __proto__: extApplication.prototype,
    };

Andrew
Re: Warning about mutating the [[Prototype]] of an object ?
On 3/28/14, 10:31 AM, ISHIKAWA,chiaki wrote:
> Recently after a refresh of code from comm-central, I noticed that running the |make mozmill| TB test suite using a full DEBUG BUILD of TB produced the following warning lines:
>
>     System JS : WARNING file:///REF-OBJ-DIR/objdir-tb3/mozilla/dist/bin/components/steelApplication.js:783 - mutating the [[Prototype]] of an object will cause your code to run very slowly; instead create the object with the correct initial [[Prototype]] value using Object.create

I'm concerned about this as well. In bug 939072, we're looking at doing some dynamic prototype mutation so Sqlite.jsm can do things like auto-close databases that have been inactive for a while (freeing resources in the process). The prototype mutation allows us to do this transparently, without consumers of the API having to care about the state of the connection. I'd appreciate it if someone could chime in on bug 939072 with a recommended solution that won't bog down performance.
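[For context, the "consumers shouldn't have to care about connection state" goal can also be met without prototype mutation, e.g. by routing calls through a wrapper that checks a state flag. A minimal sketch under that assumption -- all names here are invented for illustration, and this is not the actual Sqlite.jsm code:]

```javascript
// Hypothetical sketch: instead of swapping an object's [[Prototype]] when a
// connection auto-closes, keep one wrapper whose methods consult a state flag.
function makeConnection() {
  let open = true;
  return {
    execute(sql) {
      // Consumers call execute() the same way whether or not the underlying
      // resource has been reclaimed; the wrapper decides what to do.
      if (!open) {
        throw new Error("connection is closed");
      }
      return "executed: " + sql;
    },
    close() {
      open = false; // in the real scenario, an idle timer might call this
    },
    get isOpen() {
      return open;
    },
  };
}

const conn = makeConnection();
const result = conn.execute("SELECT 1");
conn.close();
```

The trade-off, as noted elsewhere in this thread, is an extra indirection per call rather than a deoptimized object.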
Re: Graceful Platform Degradation
On 03/27/2014 10:26 AM, Nicholas Nethercote wrote:
> This sounds like a worthy and interesting idea, but also a very difficult one. PC games allow the user to turn certain features (mostly graphics related ones) on and off so that they can find their own level of acceptable performance/quality. This doesn't seem like the right approach for viewing Web content.
>
> Yeah, games are a much easier case. The content is known ahead of time (so the degradation can be carefully tested), and typically graphics dominates the hardware requirements. In a browser, the former is untrue, and the latter is often untrue -- degradation of audiovisual elements seems tractable, but what if it's JS execution that's causing the slowness?
>
> Perhaps there could be a way to annotate the HTML/JS/CSS code to indicate which parts are less important. I.e. let the page author dictate what is less important. That would facilitate testing -- a web developer with a powerful machine could turn on the browser's stress mode and get a good sense of what would change. Whether developers would bother with it, though, I don't know.
>
> Nick

Perhaps annotating setTimeout/setInterval callbacks and animation frame callbacks with { priority: "low" }, and processing such callbacks only if we can keep up with 60Hz; { priority: "medium" } perhaps when at 30Hz. But anyhow, keeping separate lists for less-important async stuff might make it simpler for web devs to opt in to different perf characteristics.

-Olli
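[As a sketch of how such a priority-aware callback queue might behave -- purely illustrative, with an invented API; nothing like this is specced:]

```javascript
// Hypothetical sketch of the { priority: ... } idea: "high" callbacks always
// run; "low" callbacks run only when the previous frame stayed within budget.
class PriorityQueue {
  constructor() {
    this.tasks = [];
  }
  post(callback, priority = "high") {
    this.tasks.push({ callback, priority });
  }
  // frameBudgetOk would come from measuring the previous frame's duration
  // against the 16.7ms (60Hz) budget; here it is just a boolean input.
  drain(frameBudgetOk) {
    const deferred = [];
    for (const task of this.tasks) {
      if (task.priority === "high" || frameBudgetOk) {
        task.callback();
      } else {
        deferred.push(task); // retry on a later, less busy frame
      }
    }
    this.tasks = deferred;
  }
}

const q = new PriorityQueue();
const ran = [];
q.post(() => ran.push("a"), "high");
q.post(() => ran.push("b"), "low");
q.drain(false); // over budget: only the high-priority callback runs
q.drain(true);  // under budget: the deferred low-priority callback runs
```

The point of the sketch is just the bookkeeping: low-priority work is parked, not dropped, so it still runs eventually.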
Re: Spring cleaning: Reducing Number & Footprint of HG Repos
Hi,

Mozilla's Manifesto principle #8 states:

> 8. Transparent community-based processes promote participation, accountability and trust.

Decision making, afaik, is a process. So...

Taras Glek wrote:
> *User Repos*
> TLDR: I would like to make user repos read-only by April 30th. We should archive them by May 31st. Time spent operating user repositories could be spent reducing our end-to-end continuous integration cycles. These do not seem like mission-critical repos; seems like developers would be better off hosting these on bitbucket or github. Using a 3rd-party host has obvious benefits for collaboration self-service that our existing system will never meet.

I'd like to question the above.

> I would like to make user repos ...

Was this decision arrived at by yourself, or through a transparent process with your releng team? And if it was transparent, where was it discussed? Would I be privy to these discussions? Or is this decision similar to the DT issue? So, in the name of transparency, how exactly did you come about in deciding this?

Reading your message, I understood the possible issues (read: Why?):

1) Resource: Time
2) Resource: Disk space
3) Resource: Maintenance

Is the machine/vm/whatever that holds the user repos and/or non-user repos in any way tied to the CI systems? I.e., does the CI system also contain the user/non-user repos?

Also, are you sure that these are not 'mission-critical' repos (user repos and non-user repos)? The word 'seems' implies you're not sure.

Don't get me wrong. You have every right to make these decisions. I know (with 100% certainty) that this decision affects a few community projects. I'm not saying it isn't technically 'feasible' to move repos away from Mozilla's systems. It is technically do-able. Feasibility is project-dependent. What I'm not 100% certain about is whether it is the 'right' thing to do.

> Once you have migrated your repository, please comment in https://bugzilla.mozilla.org/show_bug.cgi?id=988628 so we can free some disk space.

This covers #2 in the list: disk space. From your post to gps, I quote:

> The fact that repos keep growing means that we'll have to do this migration again soon. We are at 260gb/300gb.

I can see why this might lead you to make your decision; but is this the only alternative? I mean, 300GB? How much is 1TB in the US? AIUI, hosting user and non-user repos doesn't take that much processing power, and the minimum HD size you can get nowadays is 500GB. Why not migrate to a 1TB drive? How long would that last? How long did 300GB last?

> *Non-User Repos*
> There are too many non-user repos. I'm not convinced we should host ash, oak, other project branches internally. I think we should focus on mission-critical repos only. There should be less than a dozen of those. I would like to stop hosting non-mission-critical repositories by end of Q2.

How exactly did you come to the conclusion that 'there should be less than a dozen of those'? I'm really curious. Did you go through each non-user repo (as you did with the user repos) and decide which ones fit your criteria as 'mission-critical'? Which 'dozen' (or fewer) repos are you talking about?

> This is a soft target. I don't have a concrete plan here. I'd like to start experimenting with moving project branches elsewhere and see where that takes us.

Pardon me, but is this the right approach? We're talking about a lot of project branches here. 'Start experimenting' isn't something that goes well with already established processes/systems. Moving them isn't a technical issue. (We've established that it's technically do-able.) It's a systemic issue. Moving a project, say A, to a different system (3rd party or otherwise) requires some changes to the underlying systems/processes that require that repo to be where it is. So those need to be changed. Then the processes/systems are checked for errors. If it doesn't work, move the project branch elsewhere. Another set of changes. Do-able? Sure. I'm not saying it isn't do-able. Is it necessarily the right thing to do?

> *What if my hg repo needs X/Y that 3rd-party services do not provide?*
> If you have a good reason to use a feature not supported by github/bitbucket, we should continue hosting your repo at Mozilla.
>
> *Why Not Move Everything to Github/Bitbucket/etc?*
> Mozilla prefers to keep repositories public by default. This does not fit Github's business model, which is built around private repos. Github's free service does not provide any availability guarantee. There is also a problem of github not supporting hg. I'm not completely sure why we can't move everything to bitbucket. Some of it is to do with anecdotal evidence of robustness problems. Some of it is lack of hooks (sans post-receive POSTs). Additionally, as with Github, there is no availability guarantee.

Umm. Haven't you already given reasons why moving everything to bitbucket isn't a good idea? (No availability guarantee would
Re: Graceful Platform Degradation
Do you think this should require author opt-in? I was thinking that we spec what we degrade automagically so it's less of a black box even without opt-in.

--Jet

----- Original Message -----
From: "smaug" sm...@welho.com
To: "Nicholas Nethercote" n.netherc...@gmail.com, "Jet Villegas" j...@mozilla.com
Sent: Friday, March 28, 2014 11:16:42 AM
Subject: Re: Graceful Platform Degradation

Perhaps annotating setTimeout/setInterval callbacks and animation frame callbacks with { priority: "low" }, and processing such callbacks only if we can keep up with 60Hz; { priority: "medium" } perhaps when at 30Hz. But anyhow, keeping separate lists for less-important async stuff might make it simpler for web devs to opt in to different perf characteristics.

-Olli
Intent to implement: HTML `picture` element
Sending this on behalf of John Schoenick, who is doing the actual implementation.

# Summary

The picture element finally brings responsive images to the Web! It allows developers to list multiple sources and have the browser intelligently select the one that is best suited for the user (based on device capabilities, supported image formats, DPR, or possibly even a user preference).

# Bug

https://bugzilla.mozilla.org/show_bug.cgi?id=870022

# Link to standard

http://picture.responsiveimages.org/

# Platform coverage

All.

# Estimated or target release

31/32 for landing, but parts may be pref'd off initially.

# Other goodness

Parallel implementation happening on the Blink side! And positive signals from the WebKit community \0/.
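[For readers new to responsive images, the selection idea the element enables can be sketched roughly as follows. This is a deliberate simplification of the spec's candidate-selection rules, with invented names -- the real algorithm also handles media conditions, formats, and sizes:]

```javascript
// Illustrative only: pick the source whose declared pixel density is closest
// to the device pixel ratio, the way an `x` descriptor list is meant to work.
function pickSource(candidates, dpr) {
  let best = candidates[0];
  for (const c of candidates) {
    if (Math.abs(c.density - dpr) < Math.abs(best.density - dpr)) {
      best = c;
    }
  }
  return best.url;
}

const candidates = [
  { url: "photo-1x.jpg", density: 1 },
  { url: "photo-2x.jpg", density: 2 },
];
const chosen = pickSource(candidates, 2); // a Retina-class display
```

The point of doing this in the browser rather than in page JS is that the selection can happen before layout, and can also account for things the page can't easily know (supported image formats, user preferences).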
Re: Warning about mutating the [[Prototype]] of an object ?
On 3/28/14 2:15 PM, Gregory Szorc wrote:
> I'm concerned about this as well.

Is the concern the spew, or the performance of changing __proto__?

In practice, what gets slow with a changing __proto__ is property access on the object, because the JS engine throws away type inference information and marks the object unoptimizable when you change __proto__. But "slow" here is a relative term. Last I measured, something like a basic property get in Ion code went from 2-3 instructions to 20-30 instructions when we deoptimize. That's huge on microbenchmarks, or for objects that are being used to store simulation state in physics simulations or whatnot, but I doubt the difference matters for your sqlite connection case. In fact, I doubt the consumers of your API are even hot enough to get Ion-compiled. And the impact of __proto__ sets is much lower in Baseline, and nonexistent in the interpreter, I believe.

-Boris
Re: Intent to implement: HTML `picture` element
This is great news! No more headaches and JS hacks.
Re: Warning about mutating the [[Prototype]] of an object ?
On 03/28/2014 02:01 PM, Andrew Sutherland wrote:
> It's my understanding that the existing idiom used throughout the Thunderbird tree is still okay to do since the prototype chain is created at object initialization time and so there's no actual mutation of the chain:
>
>     function Application() {
>     }
>     Application.prototype = {
>       __proto__: extApplication.prototype,
>     };

At present we only warn for direct property-sets, not those that are part of object literals. This was because property-sets are more visibly, obviously, clearly wrong in this regard, while *in theory* the other form could be made not so bad. But right now, that pattern above is just as bad in SpiderMonkey. And, honestly, I suspect we'll have other performance work and features to implement for a long time to come. So really you shouldn't use this pattern, either. But we don't warn about it, just yet, to minimize warning fatigue.

The right fix -- judging by the spew/code earlier in this thread -- is probably to port the patch from https://bugzilla.mozilla.org/show_bug.cgi?id=985742.

Jeff
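[The warning text itself points at the pattern that avoids both the direct set and the object-literal __proto__ form: create the prototype with the correct [[Prototype]] up front via Object.create. A minimal sketch, with hypothetical names standing in for the Thunderbird ones:]

```javascript
// Set up inheritance without ever mutating [[Prototype]]: the prototype
// object is created with the right [[Prototype]] from the start.
function ExtApplication() {}
ExtApplication.prototype.quit = function () {
  return "quit";
};

function Application() {}
// Object.create makes a fresh object whose [[Prototype]] is
// ExtApplication.prototype; no __proto__ set ever happens.
Application.prototype = Object.create(ExtApplication.prototype);
Application.prototype.constructor = Application;

const app = new Application();
```

Because no object's [[Prototype]] changes after creation, the engine never has to throw away type inference information for these objects.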
Re: Graceful Platform Degradation
On 03/28/2014 08:34 PM, Jet Villegas wrote:
> Do you think this should require author opt-in? I was thinking that we spec what we degrade automagically so it's less of a black box even without opt-in.

We probably need both: opt-in and some automagical degradation. We certainly could lower the refresh rate in some cases in order to try to avoid extra layout flushes etc.

-Olli

> --Jet
>
> ----- Original Message -----
> From: "smaug" sm...@welho.com
> To: "Nicholas Nethercote" n.netherc...@gmail.com, "Jet Villegas" j...@mozilla.com
> Sent: Friday, March 28, 2014 11:16:42 AM
> Subject: Re: Graceful Platform Degradation
>
> Perhaps annotating setTimeout/setInterval callbacks and animation frame callbacks with { priority: "low" }, and processing such callbacks only if we can keep up with 60Hz; { priority: "medium" } perhaps when at 30Hz. But anyhow, keeping separate lists for less-important async stuff might make it simpler for web devs to opt in to different perf characteristics.
Re: Warning about mutating the [[Prototype]] of an object ?
On 03/28/2014 11:01 AM, Andrew Sutherland wrote:
> The code should be fixed. It's my understanding that the existing idiom used throughout the Thunderbird tree is still okay to do since the prototype chain is created at object initialization time and so there's no actual mutation of the chain:
>
>     function Application() {
>     }
>     Application.prototype = {
>       __proto__: extApplication.prototype,
>     };
>
> Andrew

It sounds like what you want to do is some kind of inheritance; can any of the solutions documented on MDN help you with this?

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Introduction_to_Object-Oriented_JavaScript#Inheritance

Your issue is similar to the other one reported in B2G (bug 984146).

--
Nicolas B. Pierron
Re: Warning about mutating the [[Prototype]] of an object ?
The prototype mutation is hardly necessary for bug 939072; it is just a nice way to avoid a few indirections.

On 3/28/14 7:15 PM, Gregory Szorc wrote:
> I'm concerned about this as well. In bug 939072, we're looking at doing some dynamic prototype mutation so Sqlite.jsm can do things like auto-close databases that have been inactive for a while (freeing resources in the process). The prototype mutation allows us to do this transparently without consumers of the API having to care about the state of the connection. I'd appreciate it if someone could chime in on bug 939072 with a recommended solution that won't bog down performance.

--
David Rajchenbach-Teller, PhD
Performance Team, Mozilla
Re: MOZ_ASSUME_UNREACHABLE is being misused
2014-03-28 13:23 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

> How about replacing MOZ_ASSUME_UNREACHABLE with two new macros like:
>
>     #define MOZ_ASSERT_UNREACHABLE() \
>       MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE")
>
>     #define MOZ_CRASH_UNREACHABLE() \
>       do { \
>         MOZ_ASSUME_UNREACHABLE_MARKER(); \
>         MOZ_CRASH("MOZ_CRASH_UNREACHABLE"); \
>       } while (0)
>
> chris

MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily miscompile this, and anything from that point on in this branch, as you may assume that this code is unreachable." So it doesn't really serve any purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

Benoit
Re: MOZ_ASSUME_UNREACHABLE is being misused
On Fri 28 Mar 2014 01:05:34 PM PDT, Benoit Jacob wrote:
> 2014-03-28 13:23 GMT-04:00 Chris Peterson cpeter...@mozilla.com:
>> How about replacing MOZ_ASSUME_UNREACHABLE with two new macros like:
>>
>>     #define MOZ_ASSERT_UNREACHABLE() \
>>       MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE")
>>
>>     #define MOZ_CRASH_UNREACHABLE() \
>>       do { \
>>         MOZ_ASSUME_UNREACHABLE_MARKER(); \
>>         MOZ_CRASH("MOZ_CRASH_UNREACHABLE"); \
>>       } while (0)
>
> MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily miscompile this, and anything from that point on in this branch, as you may assume that this code is unreachable." So it doesn't really serve any purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

MOZ_OPTIMIZE_FOR_UNREACHABLE?
Re: MOZ_ASSUME_UNREACHABLE is being misused
My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE. It's really handy to have something like MOZ_ASSERT_UNREACHABLE, instead of having a bunch of MOZ_ASSERT(false, "Unreachable.") lines. Consider MOZ_ASSERT_UNREACHABLE being the same as MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.

-Jeff

----- Original Message -----
From: "Steve Fink" sf...@mozilla.com
To: "Benoit Jacob" jacob.benoi...@gmail.com
Cc: "Chris Peterson" cpeter...@mozilla.com, "dev-platform" dev-platform@lists.mozilla.org
Sent: Friday, March 28, 2014 1:20:39 PM
Subject: Re: MOZ_ASSUME_UNREACHABLE is being misused

> MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily miscompile this, and anything from that point on in this branch, as you may assume that this code is unreachable." So it doesn't really serve any purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

MOZ_OPTIMIZE_FOR_UNREACHABLE?
Re: Warning about mutating the [[Prototype]] of an object ?
On 3/28/14 4:03 PM, David Rajchenbach-Teller wrote:
> The prototype mutation is hardly necessary for bug 939072; it is just a nice way to avoid a few indirections.

Note that indirection is not free in perf terms either. But again, I doubt that the code in bug 939072 is gated on property access performance.

-Boris
Re: MOZ_ASSUME_UNREACHABLE is being misused
On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
> My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE. It's really handy to have something like MOZ_ASSERT_UNREACHABLE, instead of having a bunch of MOZ_ASSERT(false, "Unreachable.") lines. Consider MOZ_ASSERT_UNREACHABLE being the same as MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.

I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the name should make it clear that it's dangerous for the code to be reachable (i.e., the compiler can produce undefined behavior).

MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for that, though it's a bit of a mouthful.

-David

--
L. David Baron                         http://dbaron.org/
Mozilla                          https://www.mozilla.org/
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
    - Robert Frost, Mending Wall (1914)
Re: MOZ_ASSUME_UNREACHABLE is being misused
2014-03-28 16:48 GMT-04:00 L. David Baron dba...@dbaron.org:

> I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the name should make it clear that it's dangerous for the code to be reachable (i.e., the compiler can produce undefined behavior).
>
> MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for that, though it's a bit of a mouthful.

I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new name of MOZ_ASSUME_UNREACHABLE sound really scary.

I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as it should rarely be used. If SpiderMonkey gurus find that they need it often, they can always alias it in some local header.

I think that _ASSUME_ is too hard to understand, probably because it doesn't explicitly say what would happen if the assumption were violated. One has to understand that this is introducing a *compiler* assumption to understand that violating it would be Undefined Behavior.

How about MOZ_ALLOW_COMPILER_TO_GO_CRAZY ;-) This is technically correct, and explicit!

Benoit
Re: MOZ_ASSUME_UNREACHABLE is being misused
On 3/28/14, 4:05 PM, Benoit Jacob wrote:
>>     #define MOZ_CRASH_UNREACHABLE() \
>>       do { \
>>         MOZ_ASSUME_UNREACHABLE_MARKER(); \
>>         MOZ_CRASH("MOZ_CRASH_UNREACHABLE"); \
>>       } while (0)
>
> MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily miscompile this, and anything from that point on in this branch, as you may assume that this code is unreachable." So it doesn't really serve any purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

I included MOZ_ASSUME_UNREACHABLE_MARKER because that macro is the compiler-specific "optimize me" intrinsic, which I believe was the whole point of the original MOZ_ASSUME_UNREACHABLE. AFAIU, MOZ_ASSUME_UNREACHABLE_MARKER crashes on all Gecko platforms, but I included MOZ_CRASH to ensure the behavior was consistent for all platforms.

chris
warp, Facebook's new C/C++ preprocessor
warp is Facebook's new C/C++ preprocessor, written by Walter Bright (in D, of course). They claim build time improvements (not just preprocessing time improvements) of 10% to 40%.

https://code.facebook.com/posts/476987592402291/under-the-hood-warp-a-fast-c-and-c-preprocessor

chris
Re: MOZ_ASSUME_UNREACHABLE is being misused
On 14-03-28 05:14 PM, Benoit Jacob wrote:
> I think that _ASSUME_ is too hard to understand, probably because it doesn't explicitly say what would happen if the assumption were violated. One has to understand that this is introducing a *compiler* assumption to understand that violating it would be Undefined Behavior.
>
> How about MOZ_ALLOW_COMPILER_TO_GO_CRAZY ;-) This is technically correct, and explicit!

MOZ_UNDEFINED_BEHAVIOUR()? "Undefined behaviour" is usually enough to get C/C++ programmers' attention.

--m.
Re: MOZ_ASSUME_UNREACHABLE is being misused
2014-03-28 17:19 GMT-04:00 Mike Habicher mi...@mozilla.com: MOZ_UNDEFINED_BEHAVIOUR() ? Undefined behaviour is usually enough to get C/C++ programmers' attention. I thought about that too; then I remembered that it was only after at least a year of working on Gecko at Mozilla that I started appreciating how scary Undefined Behavior is. If I remember correctly, before that I misunderstood the concept as mere implementation-defined behavior, i.e. something that does not affect the behavior of other C++ statements the way Undefined Behavior does. Benoit
Re: warp, Facebook's new C/C++ preprocessor
On 3/28/14, 2:16 PM, Chris Peterson wrote: warp is Facebook's new C/C++ preprocessor, written by Walter Bright (in D, of course). They claim build-time improvements (not just preprocessing time) of 10% to 40%. https://code.facebook.com/posts/476987592402291/under-the-hood-warp-a-fast-c-and-c-preprocessor I have a patch at https://hg.stage.mozaws.net/gecko-collab/rev/d6ea4c36aa65 that will change the build system to measure different phases of compiler execution. For each compile command, it essentially runs -E + -fsyntax-only + -c as separate commands and measures the results. I'm not sure if isolating the run time of -E is sufficient to measure the performance overhead of the preprocessor. But what I do know is that the last time I ran this (a few months ago), mozilla-central's problem was in AST generation and codegen, not the preprocessor. I believe -E processes took only a few ms on average (assuming SSD, which you should all be using if you have a choice - MoCo employees have a choice) and AST/codegen took an order of magnitude longer. I encourage people to play around with this patch to measure things. Better yet, try to hook up warp and see if it actually makes a difference. If it does, then by all means let's integrate it into the build system like we already do with ccache.
Re: warp, Facebook's new C/C++ preprocessor
So how long until we set up opt-in build time telemetry? ;) Andrew
Re: MOZ_ASSUME_UNREACHABLE is being misused
2014-03-28 17:14 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com: How about MOZ_ALLOW_COMPILER_TO_GO_CRAZY ;-) This is technically correct, and explicit! By the way, here is an anecdote. In some very old versions of GCC, when the compiler identified Undefined Behavior, it emitted system commands that tried to launch video games that might be present on the system (see http://feross.org/gcc-ownage/ ). That actually did more to raise awareness of what Undefined Behavior means than any serious explanation... So... maybe MOZ_MAYBE_PLAY_STARCRAFT? Benoit
Re: Warning about mutating the [[Prototype]] of an object ?
Boris Zbarsky wrote: On 3/28/14 2:15 PM, Gregory Szorc wrote: I'm concerned about this as well. Is the concern the spew, or the performance of changing __proto__? In practice, what gets slow with a changing __proto__ is property access on the object, because the JS engine throws away type-inference information and marks the object unoptimizable when you change __proto__. But "slow" here is a relative term. Last I measured, something like a basic property get in Ion code went from 2-3 instructions to 20-30 instructions when we deoptimize. That's huge on microbenchmarks, or for objects that are being used to store simulation state in physics simulations or whatnot, but I doubt the difference matters for your sqlite connection case. In fact, I doubt the consumers of your API are even hot enough to get Ion-compiled. And the impact of __proto__ sets is much lower in Baseline, and nonexistent in the interpreter, I believe. What about calls from XPConnect (the original object was a component exposed as a chrome JavaScript global property)? -- Warning: May contain traces of nuts.
Re: warp, Facebook's new C/C++ preprocessor
Gregory Szorc wrote: I have a patch at https://hg.stage.mozaws.net/gecko-collab/rev/d6ea4c36aa65 that will change the build system to measure different phases of compiler execution. For each compile command, it essentially runs -E + -fsyntax-only + -c as separate commands and measures the results. Note that -E has a significant overhead on Windows; when I switched configure's header detection tests from -E to -c it shaved a noticeable fraction off its run time. -- Warning: May contain traces of nuts.
B2G Release scheduling and tree rules
[This is a slightly cropped version of an email that went out on internal lists. The only thing removed is a few partner-related dates.]

Hi All,

First off, we've decided to call the next FirefoxOS release (previously called v1.5) our v2.0 release. Note that this in and of itself doesn't mean any process changes; it's mainly a naming thing. There will be a follow-up email discussing the feature set we're targeting, which hopefully will make it clear why we're calling the release 2.0.

We'll be discussing this schedule at the 9am PST Gaia meeting on Tuesday, as well as at the 5pm PST coordination meeting, also on Tuesday. Feedback there is very much encouraged.

Here's the schedule that we've discussed and cleared internally and with partners:

Mar 17 - April 28: Stabilization for 1.3T/1.3/1.4
April 29: Official dev start for 2.0
April 29 - June 8: 2.0 development
June 9: 2.0 branch date
June 9 - July 21: Stabilization for 2.0
July 22: Official dev start for 2.1

Since it's important to understand what these various dates mean, here's a more detailed explanation:

Mar 17 - April 28: Stabilization for 1.3T/1.3/1.4

This time is intended for fixing any critical bugs that come up, i.e. bugs that we find during our and partner testing. If you don't get any 1.3T/1.3/1.4 bugs assigned to you in this time period, please do start on 2.0 work. This can either be new features, or working on the top 25 backlog items of technical debt. It's up to each team to make the tradeoff between technical debt, polish, and creating a product that's joyful to use vs. adding new features. However, since we know that we will be finding plenty of 1.3T/1.3/1.4 issues, keeping this time scheduled for those releases allows us to keep a more realistic schedule.

April 29: Official dev start for 2.0

This date mostly exists to help our planning. No actual rule changes happen on this date.
The rules both before and after April 29th are that 1.3/1.4 bugfixing is the highest priority, and 2.0 development comes after that. This date also works as a useful checkpoint. If your team still has a long list of blockers for 1.3T/1.3/1.4, then we should immediately reassess whether we should move some of the 2.0-targeted features to a later release. The flip side of this is of course that we need to get better at finding bugs earlier. Bugs that we don't find until after this date are problematic. We're working with QA and partners to try to get bugs to developers earlier.

April 29 - June 8: 2.0 development

Development happens on m-c and gaia-master. Throughout this we should aim to have a policy of no known regressions, i.e. if a patch to implement a new feature lands but the feature is buggy or breaks other things, it should be immediately backed out or disabled. The goal is that by the end of this period the 2.0 development should be done; it's not enough to have landed enough patches to get the feature mostly working. We should aim to always keep m-c/gaia-master at release quality as best we can. Whenever something isn't working quite as it should, it slows down other developers. When landing big new features, please consider landing them turned off by default until they have gotten wider testing and are of sufficient quality to be turned on. This also means that you'll want your initial feature landings to happen a week or two before June 8 so that any follow-up work can be done before June 8th. If you have a big new feature or refactoring patch that's ready just a few days before June 8th, consider waiting to land it, or waiting to turn it on, until after June 8th so that it goes into the next release instead. It's better to make big changes early in the next release than late in this one.

June 9: 2.0 branch date

Gaia 2.0 branch created. Mozilla-central goes to the aurora branch.
Actual branching happens sometime during the 9th, Pacific time, so if you want to make sure your patches are in the 2.0 release they need to land on the 8th or before. Joint triages begin. Soft string freeze: try to get all strings landed before this date. Anything beyond this should have the late-l10n keyword. Beyond this date we will be restrictive about what gets uplifted to the 2.0 release, so don't count on your patches getting uplifted if they're not for a blocker. This applies both to gaia and gecko patches.

June 9 - July 21: Stabilization for 2.0

The top priority is to help out with the 2.0 release. That doesn't just include bugs that are for sure in your code; it also includes helping to understand bugs whose cause we don't yet know. And of course, if you can spend time helping other teams that are overwhelmed with blockers, that will help the project a lot. However, if you don't have any blockers and there's no critical bug work where you can effectively help, then feel free to start on 2.1 development, or do code maintenance, or write more tests.

July 22: Official dev start for 2.1
Re: MOZ_ASSUME_UNREACHABLE is being misused
On 3/28/14 5:42 PM, Benoit Jacob wrote: So... maybe MOZ_MAYBE_PLAY_STARCRAFT? I'd like to suggest MOZ_IF_REACHED_EXPLODE_COMPUTER. -Boris
Re: Warning about mutating the [[Prototype]] of an object ?
(2014/03/29 12:27), Boris Zbarsky wrote: On 3/28/14 6:42 PM, Neil wrote: Boris Zbarsky wrote: But "slow" here is a relative term. Last I measured, something like a basic property get in Ion code went from 2-3 instructions to 20-30 instructions when we deoptimize. ... What about calls from XPConnect (the original object was a component exposed as a chrome JavaScript global property)? Calls from JS into C++ via XPConnect (so XPCWrappedNative and company) take about 1500-3000 instructions, last I measured. Calls from C++ into JS (via XPCWrappedJS) are in the 300-500 instruction range, last I measured. If you're going through XPConnect, then property access deoptimizations due to [[Prototype]] sets are the least of your performance problems. -Boris I posted the original question. It seems there are a few approaches, but given the information above, they won't really make much difference, will they. I will be monitoring Bug 939072 and Bug 984146, and if some consensus emerges, try to reflect that in the TB C-C source code. To be honest, I have no idea what kind of slowdown is experienced (not that there is a version without this slowdown available for easy comparison). TIA