Re: OMTC on Windows
On Monday, May 19, 2014 4:36:49 AM UTC+3, Boris Zbarsky wrote: On 5/18/14, 2:23 PM, Gavin Sharp wrote: OMTC is important, and I'm excited to see it land on Windows, but the Firefox and Performance teams have just come off a months-long effort to make significant wins in TART, and the thought of taking a 20% regression (huge compared to some of the improvements we fought for) is pretty disheartening. My question here is whether we have data that indicates why there is a regression. Are we painting more, or are we waiting on things more? In particular, if I understand correctly TART uses a somewhat bizarre configuration: it tries to run refresh drivers at 10,000 frames per second (yes, 10kHz). That may not interact at all well with compositing at 60Hz, and I'm not even sure how well it'll interact with the work to trigger the refresh driver off vsync. In any case, it's entirely possible to get regressions on TART that have nothing to do with actual slowdowns at normal frame rates. That may not be the case here, but it's a distinct possibility that it is. For example, on Mac we ended up special-casing the TART configuration and doing non-blocking buffer swaps in it (see bug 99) precisely because otherwise TART ended up gated on things other than actual rendering time. I would not be terribly surprised if something like that needs to be done on Windows too... -Boris It's not really 10K. It just means that the refresh driver sets the timeout for its next iteration to 0 or 1 ms, instead of aiming at 16-17 ms intervals. On OS X it indeed also includes non-blocking swap; on Windows, I don't know. We call it ASAP mode, and it's used on several tests (tscrollx, tsvgx, TART, CART). Since we started using it, those tests became much more sensitive and much better at detecting perf changes, with very few false positives, if any. But ASAP is not that weird. 
Except for the non-blocking swap, the refresh driver can set its intervals to 0-1 ms also under normal conditions - when the load is high. In ASAP mode we explicitly induce this state even when the load is low. Without this mode, those tests would (and did) just flatline around 16.7 ms, not detecting regressions (or improvements), and would in practice be quite useless. Sure, it's different from normal mode, and possibly changes some internal balances into less than optimal ones, but overall, for measuring rendering throughput (and changes in it), we don't have a better tool right now, and it has proved useful and reliable. Its problem, however, is that the numbers we measure with OMTC are not necessarily comparable to the numbers without OMTC. As long as it's without OMTC it's reliable, and as long as it's with OMTC it's also reliable (bug 946567). But the perf changes which we measure on the switch itself might not be reliable, if only because defining throughput is not easy with OMTC. The gfx/layout guys are aware of these factors, and we occasionally try to check if we could come up with something better than ASAP mode, but so far we aren't aware of anything better. As I mentioned earlier, for this kind of switch, it would help to use a few real machines with different human eyes looking at them and assessing whether there's an observable difference. - avih ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
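[Editor's sketch, not Talos code: `meanInterval` and the sample timestamps below are made up for illustration.] The mechanics avih describes can be seen in a few lines: a TART-style test reports the average frame-to-frame interval, so when every frame lands on a ~16.7 ms vsync tick the number cannot move no matter how much rendering improves, while in ASAP mode (0-1 ms refresh timeouts) the measured interval tracks actual rendering time:

```javascript
// Reduce a list of frame timestamps (ms) to the average
// frame-to-frame delta, which is what TART-style tests report.
function meanInterval(timestamps) {
  let total = 0;
  for (let i = 1; i < timestamps.length; i++) {
    total += timestamps[i] - timestamps[i - 1];
  }
  return total / (timestamps.length - 1);
}

// Under normal scheduling every frame is pinned to a vsync tick,
// so a 5 ms rendering improvement is invisible in the average:
const vsynced = [0, 16.7, 33.4, 50.1]; // mean ~16.7 ms regardless

// In ASAP mode the refresh driver re-fires with a 0-1 ms timeout,
// so the interval reflects how long rendering actually took:
const asap = [0, 6.2, 12.1, 18.5]; // mean ~6.2 ms
```

This is why the tests "flatline around 16.7 ms" without ASAP mode: the vsync cap, not rendering speed, dominates the measurement.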
Re: Intent to implement: ResourceStats API
Given that we're only planning on exposing this to certified apps, i.e. only to apps that are written as part of Gaia, I don't think we need to do an Intent to implement for this. If/when we do decide to expose this to the normal web, we should do an Intent to implement at that time. / Jonas On Sun, May 18, 2014 at 7:06 PM, Borting Chen boc...@mozilla.com wrote: Summary: ResourceStats API supports resource statistics and cost control for network usage and power consumption. Resource statistics provides resource usage information about the whole system, a system service, or an application. Cost control notifies the user when resource usage exceeds a defined threshold. Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=951976 Link to standard: N/A Link to public discussion: https://groups.google.com/forum/#!topic/mozilla.dev.webapi/tWkgbD1v_Gg Platform coverage: Firefox OS Estimated or target release: TBD Preference behind which this will be implemented: dom.resource_stats.enabled -- Borting Chen Intern of Mozilla Taiwan
Re: OMTC on Windows
On 5/18/2014 3:16 AM, Bas Schouten wrote: remove a lot of code that we've currently been duplicating. Furthermore it puts us on track for enabling other features on desktop like APZ, off main thread animations and other improvements. What is APZ? Is OMTC turned on in all graphics setups, accelerated and not? Are we testing browser performance/responsiveness in these setups? There are several bugs still open (some more serious than others) which we will be addressing in the coming weeks, hopefully before the merge to Aurora. I want to call out one specifically, bug 933733 (and related bug 912521) which we know is going to regress; we currently don't know how to fix it and we're not sure why it's happening. If anyone experiences the Firefox UI freezing unless they are moving the mouse, please let us know ASAP in the bug and we'll work with you to collect trace logs and try to pinpoint a solution. - Memory numbers will increase somewhat; this is unavoidable, as there are several steps which have to be taken when doing off main thread compositing (like double-buffering) which inherently use more memory. I am concerned about this in general, because we know that OOM is a real problem for many of our users currently, and we have very poor metrics on memory usage in the wild. So I have a couple of questions: * Is it a fair statement to say that the primary benefit of OMTC is in browser responsiveness and jank? * Are there settings/knobs which can reduce the memory usage of this feature (other than disabling OMTC completely)? If so, do we have a plan for tuning those knobs on beta before this hits release? * How will we know whether OMTC is a net win for users on low-memory computers, where increased memory usage and paging might offset the responsiveness benefits? * Are there accurate about:memory reporters for OMTC buffers? --BDS
Re: Gecko style: Braces with enums and unions
On 5/18/2014 11:16 PM, Dave Hylands wrote: My interpretation of this is that the only time braces go on the end of the line is when you're starting a control structure https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Control_Structures structs, enums, classes, and functions are not control structures, so the opening brace should be at the appropriate indentation level on a line of its own. I agree with this interpretation. Please feel free to clarify the style guide if there is an obvious way to do so. --BDS
Do we still need Trace Malloc?
Hi, Do we still need Trace Malloc? I suspect it's barely used these days. For memory profiling, we have about:memory and DMD. For shutdown leak detection we have ASAN and Valgrind. Trace Malloc is documented here: https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Debugging_memory_leaks#Trace_Malloc Trace Malloc is enabled for all TBPL debug builds. I'm not sure why; I did a try run today where I disabled it and it was green. It's used to get stacks within the deadlock detector, but I'm not sure if that's necessary, and it doesn't seem like it would be that hard to replace if it is necessary. There's also Leaky, which is documented on the abovementioned wiki page. I think it works in tandem with Trace Malloc, and may be a candidate for removal as well. Thanks. Nick
Recommended Try practices
While working to track down various job backlogs and busted pushes in our CI infrastructure, our team has observed some common anti-patterns in people's TryServer usage that contribute to these problems. In order to try to help developers find a balance between over-using and under-using Try, we have collected these patterns into a wiki page that will hopefully clarify what is recommended to ensure that a patch is less-likely to cause bustage while not needlessly wasting machine resources: https://wiki.mozilla.org/Sheriffing/How:To:Recommended_Try_Practices Note that while the page above is designed to give general recommendations for what to do, it is *NOT* a substitute for using your best judgement as a developer. We just ask that you be conscientious about the impact your pushes have on others, both from a resource utilization standpoint and a bustage (i.e. tree closure) standpoint, and factor that into deciding what jobs to run. As always, if you have any questions, you can always feel free to ping myself (RyanVM) or anybody else from our sheriffing team (edmorley, philor, KWierso, and Tomcat) on IRC or via email with any questions and we'd be happy to assist :) Thanks! -Ryan ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Update on sheriff-assisted checkin-needed bugs
As many of you are aware, the sheriff team has been assisting with landing checkin-needed bugs for some time now. However, we've also had to deal with the fallout of a higher than average bustage frequency from them. As much as we enjoy shooting ourselves in the foot, our team has decided that we needed to tweak our process a bit to avoid tree closures and wasted time and energy. Therefore, our team has decided that we will now require that a link to a recent Try run be provided when requesting checkin before we will land the patch. To be clear, this *ONLY* affects checkin-needed bugs where we're assisting with the landing. We have no desire to police what other developers do before pushing. As has always been the case, developers are expected to ensure that their patches have received adequate testing prior to pushing whether they are receiving our assistance or not. Our team is also not going to dictate which specific builds/tests are required. We're not experts in your code and we'll defer to your judgment as to what counts as sufficient testing. As mentioned earlier today in another post, if in doubt, we do have a set of general best practices for Try that can be used as a guide [1]. We just want to ensure that patches have at least received some baseline level of testing before being pushed to production. We've been testing the water with this policy for the past couple weeks and have already seen a reduction in the number of backouts needed. For those of you mentoring bugs for new contributors, please also keep this in mind in order to keep patches from being held up in landing. And consider vouching for Level 1 commit access to further empower those contributors! Thanks! -Ryan [1] https://wiki.mozilla.org/Sheriffing/How:To:Recommended_Try_Practices ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Argument validation as a JSM?
On 05/15/2014 10:58 AM, ajvinc...@gmail.com wrote: Re: readability, that's something to think about, but when I write code like this: if ((typeof num != "number") || (Math.floor(num) != num) || isNaN(num) || (num < 0) || Math.abs(num) == Infinity) { throw new Error("This needs to be a non-negative whole number"); } Well, it adds up. :) Even now I can replace the fifth condition with !Number.isFinite(num). FWIW, the newish function Number.isInteger covers every requirement here except the fourth: http://people.mozilla.org/~jorendorff/es6-draft.html#sec-number.isinteger Also FWIW, I agree with your main point about boilerplate: even if you use the nice standard library, you end up with if (!Number.isInteger(num) || num < 0) { throw new Error("handwritten error message"); } and I would rather see checkArgIsNonNegativeInteger(num); But I'm not the one you have to convince. -j
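[Editor's sketch.] The name checkArgIsNonNegativeInteger comes from the thread above, but the body and the optional name parameter below are my own assumptions, not an existing JSM; one possible shape for such a helper is:

```javascript
// Hypothetical validation helper: throws on anything that is not a
// non-negative integer. Number.isInteger already rejects NaN,
// Infinity, non-numbers, and fractional values, so only the sign
// check remains.
function checkArgIsNonNegativeInteger(num, name) {
  if (!Number.isInteger(num) || num < 0) {
    throw new TypeError((name || "argument") + " must be a non-negative whole number");
  }
  return num; // returned so it can be used inline
}
```

Call sites then shrink to a single line, e.g. checkArgIsNonNegativeInteger(offset, "offset"), which is the readability win being argued for.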
RE: Application MAR-file auto Update not working from 29.0 to 29.0.1
Use the latest mar generation code that is in the repositories. -Original Message- From: dev-platform [mailto:dev-platform-bounces+rstrong=mozilla@lists.mozilla.org] On Behalf Of Manish Sent: Saturday, May 17, 2014 2:14 AM To: dev-platform@lists.mozilla.org Subject: Re: Application MAR-file auto Update not working from 29.0 to 29.0.1 Hi Robert. Yes update.manifest file seems to be missing. Any solution for this. On Thursday, 15 May 2014 22:43:43 UTC+5:30, Robert Strong wrote: That error is due to not having an update manifest in the mar and implies that the update mar wasn't generated with a manifest. http://mxr.mozilla.org/mozilla-central/source/toolkit/mozapps/update/updat er/updater.cpp#3700 Robert -Original Message- From: dev-platform [mailto:dev-platform-bounces+rstrong=mozilla@lists.mozilla.org] On Behalf Of Manish Sent: Thursday, May 15, 2014 5:47 AM To: dev-platform@lists.mozilla.org Subject: Application MAR-file auto Update not working from 29.0 to 29.0.1 Hi, This is my first post in this forum so please let me know if anything is wrong. I have been working on a desktop application named AZARDI which is based on xulrunner. It is an offline ePub Reader for all three major platforms (Windows, Mac, Linux). You can check it out at http://azardi.infogridpacific.com/. I have been deploying the application successfully with xulrunner 29.0 and earlier along with the mar auto-update. Now I want to deploy it with xulrunner 29.0.1 and thats when I am facing the problem. The mar auto-update thing seems not to be working across all the platforms. 
Once the mar-update is downloaded and I restart the application, the updates didn't get applied, and when I checked last-update.log, this is what I found: SOURCE DIRECTORY C:\Users\$USER$\AppData\Local\\.\\\updates\0 DESTINATION DIRECTORY C:\Program Files\infogridpacific\AZARDI DoUpdate: error extracting manifest file failed: 6 calling QuitProgressUI Does anyone know if there is any change in the auto-update strategy after xulrunner 29.0? And is there any other solution to the above problem? Any help will be useful. Thanks in advance.
Re: Argument validation as a JSM?
On May 19, 2014, at 20:20, Jason Orendorff jorendo...@mozilla.com wrote: On 05/15/2014 10:58 AM, ajvinc...@gmail.com wrote: Re: readability, that's something to think about, but when I write code like this: if ((typeof num != "number") || (Math.floor(num) != num) || isNaN(num) || (num < 0) || Math.abs(num) == Infinity) { throw new Error("This needs to be a non-negative whole number"); } Well, it adds up. :) Even now I can replace the fifth condition with !Number.isFinite(num). FWIW, the newish function Number.isInteger covers every requirement here except the fourth: http://people.mozilla.org/~jorendorff/es6-draft.html#sec-number.isinteger Also FWIW, I agree with your main point about boilerplate: even if you use the nice standard library, you end up with if (!Number.isInteger(num) || num < 0) { throw new Error("handwritten error message"); } and I would rather see checkArgIsNonNegativeInteger(num); But I'm not the one you have to convince. Seems there should be existing JS libs for this if it is a common use case. --tobie
Re: Argument validation as a JSM?
On Monday, May 19, 2014 11:19:33 AM UTC-7, Jason Orendorff wrote: But I'm not the one you have to convince. Who is? :-)
Re: Update on sheriff-assisted checkin-needed bugs
(Reducing the thread scope for the followup) One issue I often run into is that the contributor doesn't have access to try, and pushing it on their behalf disrupts the rhythm of the other things I'm doing. If we go forward with this, can we also get some kind of sheriff-assisted try push flag? Something like try-needed? On Fri, May 16, 2014 at 1:54 PM, Ryan VanderMeulen rvandermeu...@mozilla.com wrote: As many of you are aware, the sheriff team has been assisting with landing checkin-needed bugs for some time now. However, we've also had to deal with the fallout of a higher than average bustage frequency from them. As much as we enjoy shooting ourselves in the foot, our team has decided that we needed to tweak our process a bit to avoid tree closures and wasted time and energy. Therefore, our team has decided that we will now require that a link to a recent Try run be provided when requesting checkin before we will land the patch. To be clear, this *ONLY* affects checkin-needed bugs where we're assisting with the landing. We have no desire to police what other developers do before pushing. As has always been the case, developers are expected to ensure that their patches have received adequate testing prior to pushing whether they are receiving our assistance or not. Our team is also not going to dictate which specific builds/tests are required. We're not experts in your code and we'll defer to your judgment as to what counts as sufficient testing. As mentioned earlier today in another post, if in doubt, we do have a set of general best practices for Try that can be used as a guide [1]. We just want to ensure that patches have at least received some baseline level of testing before being pushed to production. We've been testing the water with this policy for the past couple weeks and have already seen a reduction in the number of backouts needed. 
For those of you mentoring bugs for new contributors, please also keep this in mind in order to keep patches from being held up in landing. And consider vouching for Level 1 commit access to further empower those contributors! Thanks! -Ryan [1] https://wiki.mozilla.org/Sheriffing/How:To:Recommended_Try_Practices
Re: Update on sheriff-assisted checkin-needed bugs
Try-from-bugzilla would be awesome! / Jonas On Mon, May 19, 2014 at 1:53 PM, Bobby Holley bobbyhol...@gmail.com wrote: (Reducing the thread scope for the followup) One issue I often run into is that the contributor doesn't have access to try, and pushing it on their behalf disrupts the rhythm of the other things I'm doing. If we go forward with this, can we also get some kind of sheriff-assisted try push flag? Something like try-needed? On Fri, May 16, 2014 at 1:54 PM, Ryan VanderMeulen rvandermeu...@mozilla.com wrote: As many of you are aware, the sheriff team has been assisting with landing checkin-needed bugs for some time now. However, we've also had to deal with the fallout of a higher than average bustage frequency from them. As much as we enjoy shooting ourselves in the foot, our team has decided that we needed to tweak our process a bit to avoid tree closures and wasted time and energy. Therefore, our team has decided that we will now require that a link to a recent Try run be provided when requesting checkin before we will land the patch. To be clear, this *ONLY* affects checkin-needed bugs where we're assisting with the landing. We have no desire to police what other developers do before pushing. As has always been the case, developers are expected to ensure that their patches have received adequate testing prior to pushing whether they are receiving our assistance or not. Our team is also not going to dictate which specific builds/tests are required. We're not experts in your code and we'll defer to your judgment as to what counts as sufficient testing. As mentioned earlier today in another post, if in doubt, we do have a set of general best practices for Try that can be used as a guide [1]. We just want to ensure that patches have at least received some baseline level of testing before being pushed to production. We've been testing the water with this policy for the past couple weeks and have already seen a reduction in the number of backouts needed. 
For those of you mentoring bugs for new contributors, please also keep this in mind in order to keep patches from being held up in landing. And consider vouching for Level 1 commit access to further empower those contributors! Thanks! -Ryan [1] https://wiki.mozilla.org/Sheriffing/How:To:Recommended_Try_Practices
Re: Do we still need Trace Malloc?
On Monday 2014-05-19 07:25 -0700, Nicholas Nethercote wrote: Do we still need Trace Malloc? I suspect it's barely used these days. For memory profiling, we have about:memory and DMD. For shutdown leak detection we have ASAN and Valgrind. Trace Malloc is documented here: https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Debugging_memory_leaks#Trace_Malloc Trace Malloc is enabled for all TBPL debug builds. I'm not sure why; I did a try run today where I disabled it and it was green. It's used to get stacks within the deadlock detector, but I'm not sure if that's necessary, and it doesn't seem like it would be that hard to replace if it is necessary. Are you talking about removing it from the debug builds done on our infra, or removing it from the tree? I think the former is fine; I'd like to know more about the memory graph analysis abilities of ASAN before being ok with the latter. (It was enabled because we used to track shutdown leak metrics and some other metrics, but when we switched from tracking performance/memory metrics via graphs to tracking them via regression notices, regression notices weren't added for those measurements, so they just kept regressing without people noticing. At some point we then stopped running them because nobody was looking at them. I wouldn't mind having that metric tracked again in some way because it does catch some real leak regressions in addition to shutdown leaks.) There's also Leaky, which is documented on the abovementioned wiki page. I think it works in tandem with Trace Malloc, and may be a candidate for removal as well. I don't think leaky is in the tree anymore. (It was once in tools/leaky/.) -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. 
- Robert Frost, Mending Wall (1914)
Re: Intent to implement and ship: navigator.hardwareConcurrency
On Mon, May 12, 2014 at 5:03 PM, Rik Cabanier caban...@gmail.com wrote: Primary eng emails caban...@adobe.com, bugm...@eligrey.com *Proposal* http://wiki.whatwg.org/wiki/NavigatorCores *Summary* Expose a property on navigator called hardwareConcurrency that returns the number of logical cores on a machine. *Motivation* All native platforms expose this property. It's reasonable to expose the same capabilities that native applications get so web applications can be developed with equivalent features and performance. *Mozilla bug* https://bugzilla.mozilla.org/show_bug.cgi?id=1008453 The patch is currently not behind a runtime flag, but I could add it if requested. *Concerns* The original proposal required that a platform must return the exact number of logical CPU cores. To mitigate the fingerprinting concern, the proposal was updated so a user agent can lie about this. In the case of WebKit, it will return a maximum of 8 logical cores so high-value machines can't be discovered. (Note that it's already possible to do a rough estimate of the number of cores) Here are the responses that I sent to blink-dev before you sent the above email here. For what it's worth, in Firefox we've avoided implementing this due to the increased fingerprintability. Obviously we can't forbid any APIs which increase fingerprintability, however in this case we felt that the utility wasn't high enough given that the number of cores on the machine often does not equal the number of cores available to a particular webpage. A better approach is an API which enables the browser to determine how much to parallelize a particular computation. and Do note that the fact that you can already approximate this API using workers is just as much an argument that no additional fingerprinting entropy is exposed here as it is an argument that this use case has already been resolved. 
Additionally many of the people that are fingerprinting right now are unlikely to be willing to peg the CPU for 20 seconds in order to get a reliable fingerprint. Though obviously there are exceptions. Another relevant piece of data here is that we simply haven't gotten high-priority requests for this feature. This lowers the relative value-to-risk ratio. I still feel like the value-to-risk ratio here isn't good enough. It would be relatively easy to define a WorkerPool API which spins up additional workers as needed. A very simple version could be something as simple as: page.html: var wp = new WorkerPool("worker.js"); wp.onmessage = resultHandler; myArrayOfWorkTasks.forEach(x => wp.postMessage(x)); worker.js: onmessage = function(e) { var res = doHeavyComputationWith(e.data); postMessage(res); } function doHeavyComputationWith(val) { ... } This obviously is very handwavey. It's definitely missing some mechanism to make sure that you get the results back in a reasonable order. But it's not rocket science to get this to be a coherent proposal. / Jonas
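[Editor's sketch.] One way to fill in the ordering mechanism the proposal above is missing: tag each task with its index before dispatching it, and slot results back by index. Here runPool is a made-up name and an async function stands in for worker.js (a real implementation would use Worker and postMessage); only the index-tagging idea is the point:

```javascript
// Run `tasks` through `workerFn` with at most `poolSize` in flight,
// returning results in the same order as the input tasks.
async function runPool(tasks, workerFn, poolSize) {
  const results = new Array(tasks.length);
  let next = 0;
  async function drain() {
    while (next < tasks.length) {
      const i = next++;                       // tag the task with its index
      results[i] = await workerFn(tasks[i]);  // slot the result back in order
    }
  }
  // poolSize concurrent "workers" share the single task queue.
  await Promise.all(Array.from({ length: poolSize }, drain));
  return results;
}
```

Because each result is written into its original slot, completion order no longer matters, which is the piece a postMessage-based WorkerPool would need (e.g. by echoing the index back alongside each result).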
Re: Do we still need Trace Malloc?
On Mon, May 19, 2014 at 3:05 PM, L. David Baron dba...@dbaron.org wrote: Do we still need Trace Malloc? Are you talking about removing it from the debug builds done on our infra, or removing it from the tree? I'm aiming for the latter, though the former is a reasonable first step :) I think the former is fine; I'd like to know more about the memory graph analysis abilities of ASAN before being ok with the latter. (It was enabled because we used to track shutdown leak metrics and some other metrics, but when we switched to tracking performance/memory metrics from graphs to tracking them via regression notices, regression notices weren't added for those measurements, so they just kept regressing without people noticing. At some point we then stopped running them because nobody was looking at them. I wouldn't mind having that metric tracked again in some way because it does catch some real leak regressions in addition to shutdown leaks.) Andrew McCreight is working on getting LSAN enabled by default on TBPL (bug 976414). He's already found a bunch of leaks using it; see the blocking bugs. LSAN's docs are here: https://code.google.com/p/address-sanitizer/wiki/LeakSanitizer. There's not much info on how it works, though I suspect it's extremely similar to how Valgrind's leak checking works, which is described in some detail here: http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.leaks If you're still concerned, porting this feature of Trace Malloc to DMD is an option. But in general, tools like Valgrind and ASAN/LSAN do a better job of leak detection than simple malloc replacement tools like Trace Malloc, because they are able to do a conservative scan of addressable memory to find pointers. This means they can distinguish truly unreachable blocks from blocks that are still reachable but we didn't bother freeing. In practice, the number of blocks in the latter category is huge, and IME tools that can't distinguish the two cases are very difficult to use. 
There's also Leaky, which is documented on the abovementioned wiki page. I think it works in tandem with Trace Malloc, and may be a candidate for removal as well. I don't think leaky is in the tree anymore. (It was once in tools/leaky/.) Oh, I thought that tools/jprof/leaky.{cpp,h} was it, but it looks like bug 750290 removed it. I guess I can remove the references from the MDN page. (I can probably remove Purify from that page, too!) Nick ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Do we still need Trace Malloc?
On Mon, May 19, 2014 at 05:07:44PM -0700, Nicholas Nethercote wrote: On Mon, May 19, 2014 at 3:05 PM, L. David Baron dba...@dbaron.org wrote: Do we still need Trace Malloc? Are you talking about removing it from the debug builds done on our infra, or removing it from the tree? I'm aiming for the latter, though the former is a reasonable first step :) I think the former is fine; I'd like to know more about the memory graph analysis abilities of ASAN before being ok with the latter. (It was enabled because we used to track shutdown leak metrics and some other metrics, but when we switched to tracking performance/memory metrics from graphs to tracking them via regression notices, regression notices weren't added for those measurements, so they just kept regressing without people noticing. At some point we then stopped running them because nobody was looking at them. I wouldn't mind having that metric tracked again in some way because it does catch some real leak regressions in addition to shutdown leaks.) Andrew McCreight is working on getting LSAN enabled by default on TBPL (bug 976414). He's already found a bunch of leaks using it; see the blocking bugs. LSAN's docs are here: https://code.google.com/p/address-sanitizer/wiki/LeakSanitizer. There's not much info on how it works, though I suspect it's extremely similar to how Valgrind's leak checking works, which is described in some detail here: http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.leaks If you're still concerned, porting this feature of Trace Malloc to DMD is an option. But in general, tools like Valgrind and ASAN/LSAN do a better job of leak detection than simple malloc replacement tools like Trace Malloc, because they are able to do a conservative scan of addressable memory to find pointers. This means they can distinguish truly unreachable blocks from blocks that are still reachable but we didn't bother freeing. 
In practice, the number of blocks in the latter category is huge, and IME tools that can't distinguish the two cases are very difficult to use. OTOH, *SAN tools only run on Linux, and thus won't find any leaks that come from platform-dependent code. Mike
Re: Intent to implement and ship: navigator.hardwareConcurrency
+1000! Thanks for articulating so clearly the difference between the Web-as-an-application-platform and other application platforms. Benoit 2014-05-19 21:35 GMT-04:00 Jonas Sicking jo...@sicking.cc: On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote: I don't see why the web platform is special here and we should trust that authors can do the right thing. I'm fairly sure people have already pointed this out to you. But the reason the web platform is different is that we allow arbitrary application logic to run on the user's device without any user opt-in. I.e. the web is designed such that it is safe for a user to go to any website without having to consider the risks of doing so. This is why we for example don't allow websites to have arbitrary read/write access to the user's filesystem. Something that all the other platforms that you have pointed out do. Those platforms instead rely on users making a security decision before allowing any code to run. This has both advantages (easier to design APIs for those platforms) and disadvantages (malware is pretty prevalent on, for example, Windows). / Jonas
Re: Intent to implement and ship: navigator.hardwareConcurrency
On Mon, May 19, 2014 at 6:35 PM, Jonas Sicking jo...@sicking.cc wrote:

On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote:

I don't see why the web platform is special here, and we should trust that authors can do the right thing.

I'm fairly sure people have already pointed this out to you, but the reason the web platform is different is that we allow arbitrary application logic to run on the user's device without any user opt-in. I.e. the web is designed such that it is safe for a user to go to any website without having to consider the risks of doing so. This is why, for example, we don't allow websites arbitrary read/write access to the user's filesystem, something that all the other platforms you have pointed out do allow. Those platforms instead rely on users making a security decision before allowing any code to run. This has both advantages (it is easier to design APIs for those platforms) and disadvantages (malware is pretty prevalent on Windows, for example).

I'm unsure what point you are trying to make. This is not an API that exposes any more information than a user-agent sniffer can approximate; it will just be more precise and less wasteful.

For high-value systems (i.e. machines with many cores), we intentionally capped the reported number of cores at 8. Eight cores is very common, and most applications won't see much benefit beyond that anyway.

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
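[Editor's note] A sketch of how a page might consume the proposed API, for context. The function name, the fallback value of 2, and the reserve-one-core policy are illustrative assumptions; the cap of 8 mirrors the limit described above:

```javascript
// Size a worker pool from navigator.hardwareConcurrency, falling back
// to a conservative default when the property is unavailable.
function workerPoolSize(nav) {
  const reported = nav && nav.hardwareConcurrency;
  const cores = Number.isInteger(reported) && reported > 0 ? reported : 2;
  // Cap at 8 (matching the proposal's limit) and leave one core
  // for the main thread and UI work.
  return Math.max(1, Math.min(cores, 8) - 1);
}

// Typical use in a page:
//   const n = workerPoolSize(navigator);
//   for (let i = 0; i < n; i++) spawnWorker();
```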
Re: Do we still need Trace Malloc?
On Monday 2014-05-19 20:09 -0700, Nicholas Nethercote wrote:

On Mon, May 19, 2014 at 5:32 PM, L. David Baron dba...@dbaron.org wrote:

Another is being able to find the root strongly connected components of the memory graph, which is useful for finding leaks in other systems (e.g., leaks of trees of GTK widget objects) that aren't hooked up to cycle collection. It's occasionally even a faster way of debugging non-CC but nsTraceRefcnt-logged reference-counted objects.

How does trace-malloc do that? It sounds like it would need to know about object and struct layout.

Roughly the same way a conservative collector would: any word-aligned word in a heap object whose value is the address of another heap allocation (including an address in the interior of the allocation) is treated as a pointer to that allocation. (It's actually done in the leaksoup tool outside of trace-malloc.)

-David

--
L. David Baron http://dbaron.org/
Mozilla https://www.mozilla.org/

Before I built a wall I'd ask to know / What I was walling in or walling out, / And to whom I was like to give offense. - Robert Frost, Mending Wall (1914)

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to implement and ship: navigator.hardwareConcurrency
On Mon, May 19, 2014 at 6:46 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

+1000! Thanks for articulating so clearly the difference between the Web-as-an-application-platform and other application platforms.

It really surprises me that you would make this objection. WebGL certainly would *not* fall into this Web-as-an-application-platform category, since it exposes machine information [1] and is generally considered insecure [2] according to Apple and (in the past) Microsoft. Please note that I really like WebGL and am not worried about these issues; I'm just pointing out your double standard.

1: http://renderingpipeline.com/webgl-extension-viewer/
2: http://lists.w3.org/Archives/Public/public-fx/2012JanMar/0136.html

2014-05-19 21:35 GMT-04:00 Jonas Sicking jo...@sicking.cc:

On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote:

I don't see why the web platform is special here, and we should trust that authors can do the right thing.

I'm fairly sure people have already pointed this out to you, but the reason the web platform is different is that we allow arbitrary application logic to run on the user's device without any user opt-in. I.e. the web is designed such that it is safe for a user to go to any website without having to consider the risks of doing so. This is why, for example, we don't allow websites arbitrary read/write access to the user's filesystem, something that all the other platforms you have pointed out do allow. Those platforms instead rely on users making a security decision before allowing any code to run. This has both advantages (it is easier to design APIs for those platforms) and disadvantages (malware is pretty prevalent on Windows, for example).

/ Jonas

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform