Re: Code Review Session
On Friday, May 24, 2013 8:05:31 AM UTC-7, Mike Conley wrote: Sounds like we're talking about code review. But I want to qualify integration into bugzilla: I explicitly do not want a tool that is tightly coupled to Bugzilla. In fact, I want a tool that has as little to do with Bugzilla as feasible. I'm a contributor to the Review Board project[1], which is not coupled with Bugzilla whatsoever. This quarter, we've dug into looking at upgrading the review experience in Bugzilla. And after wrestling some more with splinter and WebKit's code-review.js tool, we decided that Review Board is a much better approach. We're working with IT to stand up a Review Board server that Bugzilla will make use of, so it won't be tightly coupled (as I understand it). Cc'ing Glob and Mcote, who can provide more details. It also has an extension called ReviewBot[2], which can run patches through static analysis or automated tests, and inject the results automatically into the review request as a ReviewBot review. That sounds like something else we should investigate in addition to Review Board. Cheers, Clint ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Code Review Session
On 5/28/13 7:39 AM, Benjamin Smedberg wrote: Bugzilla is our tracking tool of record. I'm personally rather bullish about bugzilla improvements, now that the 4.2 upgrade is done and we have solid people working on it and making weekly improvements. Yeah. It has its share of flaws, but it's also battle-tested, and has seen significant improvements recently. I'd also given up on it in the past, but it's on a better path now. Multiple tools end up being confusing, and reliance on external tools carries the risk that if they shut down we'll lose important history. (This is already Not Fun when spelunking in old Mozilla code and finding that something landed without any bug number to explain what it was doing or why, nor the context of the thinking or issues that led to the change.) I think it's fine (good, even!) to have small / experimental projects try new things, but the expectation should be that once those projects become non-experimental / production, they should return to the usual tools. (And we should be improving / expanding the usual tools to meet modern requirements.) Justin
Re: Proposal to make Firefox open in the foreground on Mac when launched from terminal
On 30.05.13 00:09, Ehsan Akhgari wrote: So I'd like to ask, if you care about this, which way would _you_ have as the default? Definitely with -foreground. At the moment I always add it manually. -Markus
New Windows 7 and XP try syntax
FYI Before: try: -b do -p win32 -u all[5.1,6.1] -t none Now: try: -b do -p win32 -u all[Windows XP,Windows 7] -t none This triggers jobs on the iX hardware rather than the Rev3 minis. FYI, today we disabled jobs on the Rev3 minis. The Try Syntax Chooser page has also been updated. cheers, Armen https://bugzilla.mozilla.org/show_bug.cgi?id=877465
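For anyone unfamiliar with how these flags are applied: try syntax lives in the commit message of the head you push to the try repository. A minimal sketch using mq (the patch name is made up for illustration, and this assumes the mq extension is enabled and you have try push access):

```shell
# record an empty patch whose message carries the try flags,
# then force-push the head to the try repository
hg qnew -m "try: -b do -p win32 -u all[Windows XP,Windows 7] -t none" try-flags
hg push -f ssh://hg.mozilla.org/try
```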
Re: Code Review Session
On 5/23/2013 5:32 PM, Scott Johnson wrote: Members of dev-platform: As part of the Web Rendering work-week in Taiwan, we had a discussion of the process of code review, graciously led by roc. If you were unable to attend, or were able to attend and would like to review the proceedings, notes are available here: https://etherpad.mozilla.org/TaiwanWorkWeekCodeReview Special thanks to Anne van Kesteren and Daniel Holbert for assisting me in the note-taking when my laptop battery died. ~Scott Video of the session is available at: http://people.mozilla.org/~cpearce/CodeReviews-Taipei-May2013.mp4 (1 hour 10 mins duration, 346MB) Chris P.
Re: Hide Windows 7 and Windows XP jobs for Rev3 minis
I did not do this last night. I will be doing this now and will take mbrubeck's request into account. On 2013-05-22 11:38 AM, Matt Brubeck wrote: On 5/22/2013 10:22 AM, Armen Zambrano G. wrote: We would like to determine if we are ready to stop running jobs on the rev3 minis before stopping them. If you have no objections I will hide the builders by EOD. We can re-visit on Monday if we are ready to go ahead and stop running jobs on rev3 minis. My one concern is bug 859571, which prevents us from getting useful ts_paint data on the iX talos slaves: https://bugzilla.mozilla.org/show_bug.cgi?id=859571 If possible, we should keep the talos other and talos dirtypaint jobs running and visible on the old hardware until that is fixed.
Re: [RFC] Modules for workers
On 5/28/2013 5:08 AM, David Rajchenbach-Teller wrote: On 5/27/13 7:34 PM, Jonas Sicking wrote: The alternative is to use C++ workers. This doesn't work for addons obviously, but those aren't yet a concern for B2G. Well, my main concern is front-end- and add-on-accessible code. Normally, it shouldn't influence B2G. Weren't we moving addons into separate processes anyway? This has been discussed, but I haven't heard about it in ages. It relied on e10s support and so was deprioritised when e10s was.
Re: Can't get the xpcom service with do_GetService
On 29/05/13 08:22, Hao Dong wrote: I try to use the following code to get the jsdIDebuggerService, but it failed: nsresult rv; const char jsdServiceCtrID[] = "@mozilla.org/js/jsd/debugger-service;1"; nsCOMPtr<jsdIDebuggerService> jsds = do_GetService(jsdServiceCtrID, &rv); Who can help me solve it? I try not to do what I'm about to do (not answer your question), but in this case I feel very strongly compelled: Don't use the old JSD1 debugger service anymore. It is very problematic from a performance perspective, and has been replaced by JSD2. I don't know up until what point the old API will continue to be supported, but I dearly hope that it will in fact disappear in the future. More info on the new API is here: https://wiki.mozilla.org/Debugger And no, this API is not available from C++, as far as I can tell. What problem are you trying to solve? ~ Gijs
Re: Can't get the xpcom service with do_GetService
Gijs Kruitbosch wrote: Don't use the old JSD1 debugger service anymore. It has been replaced by JSD2. Do you mean replaced in the sense of "here's how to replace what you were doing in JSD1", or in the sense of "JSD2 is cool! Let's drop JSD1"? -- Warning: May contain traces of nuts.
Re: Code Review Session
On 5/29/2013 1:50 PM, Benoit Girard wrote: The clang-format list tells me that there are near-term plans for a standalone clang-tidy utility built on clang-format that will do much of what we're looking for as far as basic code-cleanup goes. I'm asking around about what near-term means, and if the answer isn't good enough I'm going to try to add something to clang-format to give users some moz-style guidance as a temporary measure. I'd be happy to replace what I'll be rolling out in bug 875605 once something better comes along. But C++ isn't new so I'm not holding my breath :). From: Daniel Jasper djasper-hpiqsd4aklfqt0dzr+a...@public.gmane.org Subject: [PATCH] Initial clang-tidy architecture Date: Wed, 29 May 2013 03:30:35 -0700 I call that near-term. :-) -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist
Re: Platform meeting changes effective June 2013
On 5/29/13 9:05 PM, Brad Lassey wrote: On 5/29/13 11:42 PM, Anthony Jones wrote: What is the attendance like from the Asia-Pacific region? I can't recall anyone from Asia-Pacific who is a regular attendee. It is extremely difficult to (expect anyone to, or otherwise) regularly attend a meeting that is held at 3am local time. That's why the notes are great for catching up on what was discussed, which is what I do whenever I'm in that region. Just my 2 cents, -Gary
Re: We should drop MathML
On 5/28/13 8:22 AM, Benoit Jacob wrote: When I started this thread, I didn't even conceive that one would want to apply style to individual pieces of an equation. Someone gave the example of applying a color to e.g. a square root sign, to highlight it; I don't believe much in the pedagogic value of this kind of trick --- that sounds like a toy to me --- but at this point I didn't want to argue further, as that is a matter of taste. Note that I've seen at least one PhD dissertation that made good use of color to highlight which terms of a 2-page-long equation canceled each other to produce the final (much shorter; iirc it was 0) result. Of course that was in the PDF version; the print version had to be black-and-white, and was a lot harder to follow. -Boris
Re: Disable this Thursday Windows 7 and Windows XP on Rev3 machines
This has been live since this morning. Thank you to all who helped make this happen. http://armenzg.blogspot.com/2013/05/kiss-old-testing-infra-revision-3-mac.html On 2013-05-28 1:08 PM, Armen Zambrano G. wrote: (posting to dev.platform and dev.tree-management) Hello all, We have been running unit tests and talos jobs on the iX hardware for a while, side by side with the Rev3 minis. We have disabled the Rev3 mini jobs for a few days on our main tbpl pages and we have not missed them, AFAIK. We have some known intermittent oranges, but nothing that has been shown to be a show-stopper. Unless I hear any strong objections, we will stop running jobs on the rev3 minis for FF23 (m-a) and FF24 (m-c) based projects. best regards, Armen Mozilla's Release Engineering
Pink pixel of death (reftest failures due to bad RAM)
Hi all, We have found that sometimes we fail our reftests due to a couple of pixels getting stored in bad sectors of the RAM of our machines. We have also seen garbage collection crashes due to it. We have also recently discovered that memtest has done a better job at catching bad RAM compared to Apple's and other diagnostic tools. The new problem we have is that it has also started happening on the Windows iX machines (one machine so far). Do you have any ideas on how to make our tests handle this problem in a better way? Do you have any suggestions of a better tool for memory issues on Windows? Do you have any ideas on how to quickly check if a memory replacement has fixed the issue? best regards, Armen [1] https://tbpl.mozilla.org/php/getParsedLog.php?id=23485895&tree=Mozilla-Central#error1 [2] https://bugzilla.mozilla.org/show_bug.cgi?id=857705
Re: Proposal to make Firefox open in the foreground on Mac when launched from terminal
On 05/29/2013 06:09 PM, Ehsan Akhgari wrote: Typically when you use the terminal to open an application on Mac, the application is opened in the background. This means that for example when you use mach run or mach debug, you need to either use the mouse or a painful sequence of keyboard shortcuts in order to get to the Firefox window that was just opened. As a keyboard user, I find this extremely painful in my day to day work, and recently I'm feeling the burden even more given how easy it is these days to run Firefox through mach. We currently support a Mac-only command line flag called -foreground which makes Firefox's window open in the foreground by default when run from the Terminal. I filed bug 863754 to ask for mach to pass -foreground by default, but Steven brought up the excellent point that this behavior may actually be desired by other people. So I'd like to ask, if you care about this, which way would _you_ have as the default? FWIW, I don't believe this behavior makes much difference to our users, since they don't typically run Firefox from the command line, so I consider this mostly a choice which should be influenced by our developers. Also, note that there are some Mac applications which do already open in the foreground by default. Cheers, -- Ehsan http://ehsanakhgari.org/ I've never known about -foreground until now. I would be thrilled if it were the default behaviour.
Re: Embracing git usage for Firefox/Gecko development?
On 5/30/13 8:04 PM, Joshua Cranmer wrote: I can't see how they are a good alternative. With patch queues, I can maintain a complex refactoring in a patch queue containing dozens of smallish patches. In particular, I can easily realize I made a mistake in patch 3 while working on patch 21 and make sure that the fix ends up in patch 3; I don't see how that is easily achievable in git branches. Stacked Git (stgit) is a good compromise between git's lightweight branches and hg's patch queues. I wouldn't want to use git without it. I wrote about stgit here: http://www.cpeterso.com/blog/02013/03/stacked-git-mercurial-style-patch-queues-for-git/ chris peterson
Re: Pink pixel of death (reftest failures due to bad RAM)
On 5/30/2013 11:51 AM, Armen Zambrano G. wrote: Hi all, We have found that sometimes we fail our reftests due to a couple of pixels getting stored in bad sectors of the RAM of our machines. We have also seen garbage collection crashes due to it. We have also recently discovered that memtest has done a better job at catching bad RAM compared to Apple's and other diagnostic tools. The new problem we have is that it has also started happening on the Windows iX machines (one machine so far). Do you have any ideas on how to make our tests handle this problem in a better way? Do you have any suggestions of a better tool for memory issues on Windows? Do you have any ideas on how to quickly check if a memory replacement has fixed the issue? best regards, Armen Dolske may have some ideas here ... https://twitter.com/dolske/status/339495877024563201
Re: Platform meeting changes effective June 2013
On 5/31/2013 3:25 AM, Lawrence Mandel wrote: 2. Work on summarizing discussions or vague items in the agenda so that the notes can be understood by people who don't attend the meeting Please let me know how the notes shape up. As someone who can't attend most meetings due to inconvenient time zones, I greatly appreciate good meeting notes, and I regularly try to read them. Thanks for taking charge of this! Cheers, Chris Pearce.
Re: Code Review Session
On 5/24/13 10:50 AM, Justin Lebar wrote: Consider for example how much better harthur's fileit and dashboard tools [1] [2] are than bugzilla's built-in equivalents. [2] http://harthur.github.io/bugzilla-todos/ So I actually tried using the dashboard you link to for the last week. It's actually worse, at least for my use case, than what I do in bugzilla right now (which is just a saved search for review/feedback requests on me), because: 1) It does not show feedback requests. This may explain why some people are routinely ignoring them. 2) It's a lot slower to load than a saved search. 3) If left open (see #2) it runs a bunch of JS off timeouts, causing noticeable jank in my browser. All of which is to say that use cases apparently differ, since I assume for you the bugzilla-todos dashboard is in fact a lot better. Of course I would love something that did what this dashboard does in terms of showing bugs to check in and whatnot, without the drawbacks. ;) We shouldn't conflate owning the PR data with integrating the PR tool into bugzilla. I think that depends on what "integrating" means. Ideally, we should have something that makes it easy, starting from a line of code, to track back why that line of code is the way it is. In a perfect world that would mean correct blame, with the changeset linking to the discussion about the patch, reviews, design documents if they exist, etc, etc. We're not going to get there, if nothing else because a lot of that seems to be in email/irc/hallways, but the closer we can get the better. -Boris
Re: We should drop MathML
I think the main points were: 1) Not everybody uses TeX as an input method. 2) Not everybody writes the source of Web pages by hand. 3) People using TeX want its full power (defining macros, loading packages etc). So except in simple and limited cases, a small subset of TeX is not what people want. MathML already contains the core features for mathematical rendering and has been widely used for a long time. Concrete tools have been developed and have shown that MathML can be used with other Web languages (HTML, SVG and CSS), with DOM/Javascript, to write mathematical search engines, can be generated from AsciiMath, can be used for accessibility (*), in WYSIWYG editors, in ebooks, for copy and paste (e.g. Microsoft Word, Mathematica/Maple), in publishers' internal workflows etc. For the LaTeX community, tools like LaTeXML (currently able to convert 94% of the arXiv papers) have shown that MathML can be generated from LaTeX and that the annotation element can even be used to expose and share the original LaTeX markup (e.g. for copy and paste). So your proposal of a small subset of TeX is mostly (depending on what you put exactly in the subset and how you map it to the DOM tree) a syntactical change, just like the RelaxNG XML form vs the RelaxNG compact form. As I said, I understand this can be helpful when one wants to quickly write an HTML page with math content, and Javascript libraries like MathJax or AsciiMath have already addressed that need. With a non-XML/SGML syntax, you're just making it more difficult for authors/implementers to understand what will be the DOM tree you get at the end. So you're making life easier for people writing their Web pages by hand with no CSS or Javascript involved, but much more complicated in all the other cases. 
You claimed that MathML was not used and that replacing it by the more universal TeX syntax would make developing tools easier (like accessibility or copy and paste) while things that do not work well with TeX (like CSS) were not absolutely necessary. That just seemed misinformation about the current status of MathML and current use cases. You asked advice about the usefulness of MathML on the MathJax list and we tried to explain that it is indeed very important. Of course TeX is important too, but it is not always adapted to a Web context (remember it was initially designed by Knuth to write books). As I said, I would prefer improving MathML, especially compatibility with CSS, to starting again from zero with a new math language for the Web and reinventing the wheel. (*) BTW, ChromeVox has recently been released with MathML support. So Google can now read the math but not display it: http://www.youtube.com/watch?v=YyWu9HB9QtU
Disable this Thursday Windows 7 and Windows XP on Rev3 machines
(posting to dev.platform and dev.tree-management) Hello all, We have been running unit tests and talos jobs on the iX hardware for a while, side by side with the Rev3 minis. We have disabled the Rev3 mini jobs for a few days on our main tbpl pages and we have not missed them, AFAIK. We have some known intermittent oranges, but nothing that has been shown to be a show-stopper. Unless I hear any strong objections, we will stop running jobs on the rev3 minis for FF23 (m-a) and FF24 (m-c) based projects. best regards, Armen Mozilla's Release Engineering
Re: We should drop MathML
On 04/06/13 23:30, Jonas Sicking wrote: It would be cool to find a solution that makes the simple things simpler than MathML, while keeping the complicated things possible. Isn't the answer to that sort of question normally something like: a mini-language for simple math, plus a JS library you can include which will automatically turn it into MathML? And lo, there was ASCIIMath: http://www1.chapman.edu/~jipsen/mathml/asciimath.html Gerv
Disabled Gaia UI tests on pandas - any reason for B2G *panda* builds?
Hi, Today, we disabled Gaia UI tests on the B2G pandas [1][2][3]. The reason is that they were broken and our testing plan is to test on Desktop builds rather than pandas [4]. At some point we used to run these jobs across all trees, then we hid them, then we only left them running on Cedar and Gaia-Master. If we don't have a need to test on panda boards, are there any reasons left to keep on creating the B2G panda builds? Looking forward to hearing back from you. cheers, Armen https://mozillians.org/en-US/u/armenzg -- [1] https://tbpl.mozilla.org/?tree=Cedar&showall=1&jobname=b2g_panda%20cedar%20opt%20test%20gaia-ui-test [2] https://tbpl-dev.allizom.org/?tree=Gaia-Master [3] http://pandaboard.org/node/300/#PandaES [4] A) It is easier to test on B2G Desktop builds than on Panda boards B) We don't gain anything from testing on Panda boards (no radio capabilities)
Being careful about adding new Web APIs, and how we are perceived
I know we already try to be, but it's worth seeing things like https://groups.google.com/a/chromium.org/forum/#!msg/chromium-dev/wDV9JHs0mBA/0iw5vRDThXYJ to be reminded that others may not realize that we are... Now the particular examples Greg cites are, I think, not really applicable, but it's worth keeping in mind that the perception is there. To the extent that we worry about this perception, what are good ways to deal with it, other than announcing an official policy on how we will add stuff and sticking to it? -Boris
Re: Master xpcshell.ini manifest removed
On 6/12/2013 3:11 PM, Gregory Szorc wrote: The master xpcshell.ini manifest has been removed from source control in bug 869635. The master manifest is now generated automatically by deriving the data from moz.build files. So, just add or remove a reference to an added/deleted xpcshell.ini in a moz.build file and things will just work. As a side note, B2G and Android still use their own xpcshell manifests, at least for the tests run by buildbot slaves. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist
Re: Can't get the xpcom service with do_GetService
On 29/05/13 10:50, Neil wrote: Gijs Kruitbosch wrote: Don't use the old JSD1 debugger service anymore. It has been replaced by JSD2. Do you mean replaced in the sense of "here's how to replace what you were doing in JSD1", or in the sense of "JSD2 is cool! Let's drop JSD1"? The latter. Although just saying "it's cool" makes it sound like the only thing it has going for it is it being new, which wouldn't be true. And I am not aware of any new things being written using JSD1 rather than 2, even if the former hasn't been removed from the tree just yet. ~ Gijs
Can't get the xpcom service with do_GetService
I try to use the following code to get the jsdIDebuggerService, but it failed: nsresult rv; const char jsdServiceCtrID[] = "@mozilla.org/js/jsd/debugger-service;1"; nsCOMPtr<jsdIDebuggerService> jsds = do_GetService(jsdServiceCtrID, &rv); Who can help me solve it?
Re: Proposal to make Firefox open in the foreground on Mac when launched from terminal
On 5/29/13 3:09 PM, Ehsan Akhgari wrote: Typically when you use the terminal to open an application on Mac, the application is opened in the background. Mmm, indeed. Someone told me long ago that this was a platform convention, but it sure is annoying. Seems like a nice refinement to me. I'd assume that the most common use-case for launching the browser is to, well, use the browser. If people don't want -foreground as a default, I'd like to understand why. (Loading a test page and focusing on console output is fair, and the -background flag proposed in the bug would help.) Justin
Re: Platform meeting changes effective June 2013
On 5/29/13 11:42 PM, Anthony Jones wrote: What is the attendance like from the Asia-Pacific region? I can't recall anyone from Asia-Pacific who is a regular attendee.
Re: Heads up: difference in reference counting between Mozilla and WebKit worlds
On 6/19/2013 3:20 PM, Ehsan Akhgari wrote: On 2013-06-19 12:56 PM, Gregory Szorc wrote: On 6/18/13 9:05 PM, Anthony Jones wrote: On 19/06/13 16:02, Robert O'Callahan wrote: I believe that in WebKit you're not supposed to call new directly. Instead you call a static create method that returns the equivalent of already_AddRefed. Do they have a lint checker we can use for that? Last I checked they had a Clang plugin that emitted warnings/errors on certain coding convention violations. As of bugs 767563 and 851753, we have the same. Although, the builder is hidden and the functionality in the plugin is minimal. It would be rad if we could change both of those. Bug 880434 is tracking enabling those builds by default. Note that for now the only thing that our plugin checks is MOZ_STACK_CLASS, but more analyses are very welcome; I promise fast reviews if people write them! You've forgotten MOZ_MUST_OVERRIDE! And MOZ_NONHEAP_CLASS! :-) I've got code that adds checking for static initializers, but that's proving to be much, much harder to do than I first anticipated. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist
Re: Code coverage take 2, and other code hygiene tools
On 6/24/2013 8:50 PM, Clint Talbert wrote: Decoder and Jcranmer got code coverage working on Try[1]. They'd like to expand this into something that runs automatically, generating results over time so that we can actually know what our code coverage status is with our major run-on-checkin test harnesses. While both Joduinn and I are happy to turn this on, we have been down this road before. We got code coverage stood up in 2008, ran it for a while, but when it became unusable and fell apart, we were left with no options but to turn it off. I think one of the problems with the old code coverage stuff was that it got almost no visibility. Now, Jcranmer and Decoder's work is of far higher quality than that old run, but before we invest the work in automating it, I want to know if this is going to be useful and whether or not I can depend on the community of platform developers to address inevitable issues where some checkin, somewhere breaks the code coverage build. Do we have your support? Will you find the generated data useful? I know I certainly would, but I need more buy-in than that (I can just use try if I'm the only one concerned about it). Let me know your thoughts on measuring code coverage and owning breakages to the code coverage builds. If you are just attempting to cover C/C++ code, then code coverage amounts to a few extra flags in CFLAGS/CXXFLAGS/LDFLAGS. The biggest problem is that it increases runtime, which could push us past some timeout thresholds, and, if you use --disable-debug --enable-optimize='-g', some tests actually crash instead (mostly a set of tests in the addon manager). I'm personally a bit of a data visualization junky. One of the projects I've started doing but haven't completed is getting an animated video of how code coverage evolves in our test suite. 
I ran an experiment several years ago where I built something that approximated the Thunderbird nightly revision and ran code coverage on its tests and made a video of the results (my blog post describing this is here: http://quetzalcoatal.blogspot.com/2010/04/animated-code-coverage.html; the video is no longer available, but I still have all of the source material lying around). This has also led me to build tools like http://www.tjhsst.edu/~jcranmer/c-ccov/coverage.html?dir=mailnews; to be able to do these kinds of projects, I essentially just need the LCOV .info files (I use my own HTML generation scripts, since I found LCOV to be too slow for me, and I have other minor UI gripes), especially if coverage is broken down by testsuite [1]. Also, what do people think about standing up JSLint as well (in a separate automation job)? We should treat these as two entirely separate things, but if that would be useful, we can look into that as well. We can configure the rules around JSLint to be amenable to our practices and simply enforce against specific errors we don't want in our JS code. If the JS style flamewars start up, I'll split this question into its own thread because they are irrelevant to my objective here. I want to know if it would be useful to have something like this for JS or not. If we do decide to use something like JSLint, then I will be happy to facilitate JS-style flamewars because they will then be relevant to defining what we want Lint to do, but until that decision is made, let's hold them in check. The code in mozilla-central and comm-central tends to aggressively use new JS features added to SpiderMonkey, such as yield, array comprehensions, etc. A brief test of JSLint shows that it doesn't support these features, which makes it a non-starter in my opinion. 
If we want JS static analysis tools running on our codebase, then they probably ought to be based on Reflect.parse in SpiderMonkey (which knows about these things), and arguably based on similar technology to the AMO checker. [1] Another long-term project I've had but haven't put enough time into is getting this kind of data on all platforms, not just Linux64. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist
Re: Embracing git usage for Firefox/Gecko development?
On 5/30/2013 7:56 PM, Johnny Stenback wrote: [TL;DR, I think we need to embrace git in addition to hg for Firefox/Gecko hacking, what do you think?] Personally, the more I use git, the more I hate it. But I also see that other people love it... * Developers can use git branches. They just work, and they're a good alternative to patch queues. I can't see how they are a good alternative. With patch queues, I can maintain a complex refactoring in a patch queue containing dozens of smallish patches. In particular, I can easily realize I made a mistake in patch 3 while working on patch 21 and make sure that the fix ends up in patch 3; I don't see how that is easily achievable in git branches. * Developers can benefit from the better merge algorithms used by git. I don't know what you mean by that. Note that git's octopus merges are not directly realizable in hg, so an hg-git bidirectional mirror may need to ban such things in the incoming git. * Git works well with Github, even though we're not switching to Github as the ultimate source of truth (more on that below). Outside of the community size, I personally can't see any benefits to Github for Mozilla; if all you want is community outreach, you pretty much just need a read-only git mirror on Github, which we already have. Some of the known issues with embracing git are: * Performance of git on windows is sub-optimal (we're already working on it). * Infrastructure changes needed... * Retraining Mercurial developers to use git (if we went git-only) * Porting all of our Mercurial customizations to git for a check-in head * Updating all of our developer build documentation * Adapting build processes to adjust to the git way of doing things Option 1 is where I personally think it's worth investing effort. It means we'd need to set up an atomic bidirectional bridge between hg and git (which I'm told is doable, and there are even commercial solutions for this out there that may solve this for us). 
This is harder than it sounds: hg and git are subtly different in ways that can seriously frustrate users if you don't pay attention. For example (one that bit me very hard): hg records file copies/moves as part of the changeset, while git attempts to reverse-engineer them when it needs to do so. So my attempt to merge my changes with a remote git repository via hg-git failed miserably, since the local hg repo didn't pick up the moves that had been made on the git side. Mercurial also treats history as immutable, so you can't push rewritten history. If all you need is a read-only mirror, these kinds of differences aren't as important (you're not going to damage official records, just inconvenience some developers); for a two-way mirror, they are very much worth considering. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
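For what it's worth, the mq trick questioned earlier in this thread (noticing while on patch 21 that patch 3 is wrong, and landing the fix in patch 3) does have a git analogue: fixup commits plus an autosquash rebase. A hedged sketch with a toy repo and illustrative commit names, assuming a reasonably modern git:

```shell
# A rough git analogue of "fix patch 3 while working on patch 21":
# record the fix as a fixup of the earlier commit, then autosquash.
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
echo one   > a.txt && git add a.txt && git commit -q -m "patch 1"
echo three > b.txt && git add b.txt && git commit -q -m "patch 3"
echo later > c.txt && git add c.txt && git commit -q -m "patch 21"

# While on "patch 21" we notice "patch 3" was wrong: commit the fix,
# marked as belonging to that commit (":/patch 3" finds it by message).
echo three-fixed > b.txt
git add b.txt
git commit -q --fixup ':/patch 3'

# Fold the fixup back into "patch 3" non-interactively.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --oneline    # back to three commits; "patch 3" now has the fix
```

Unlike mq, this rewrites commits rather than text patches, so conflict resolution happens with git's normal merge machinery.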
Re: Code coverage take 2, and other code hygiene tools
On 6/24/2013 6:50 PM, Clint Talbert wrote: Decoder and Jcranmer got code coverage working on Try[1]. They'd like to expand this into something that runs automatically, generating results over time so that we can actually know what our code coverage status is with our major run-on-checkin test harnesses. While both Joduinn and I are happy to turn this on, we have been down this road before. We got code coverage stood up in 2008 and ran it for a while, but when it became unusable and fell apart, we were left with no option but to turn it off.

Now, Jcranmer and Decoder's work is of far higher quality than that old run, but before we invest the work in automating it, I want to know if this is going to be useful, and whether or not I can depend on the community of platform developers to address the inevitable issues where some checkin, somewhere, breaks the code coverage build. Do we have your support? Will you find the generated data useful? I know I certainly would, but I need more buy-in than that (I can just use try if I'm the only one concerned about it). Let me know your thoughts on measuring code coverage and owning breakages to the code coverage builds.

Also, what do people think about standing up JSLint as well (in a separate automation job)? We should treat these as two entirely separate things, but if that would be useful, we can look into that as well. We can configure the rules around JSLint to be amenable to our practices and simply enforce against specific errors we don't want in our JS code. If the JS style flamewars start up, I'll split this question into its own thread, because they are irrelevant to my objective here. I want to know if it would be useful to have something like this for JS or not. If we do decide to use something like JSLint, then I will be happy to facilitate JS-style flamewars, because they will then be relevant to defining what we want Lint to do, but until that decision is made, let's hold them in check.
So, the key things I want to know: * Will you support code coverage? Would it be useful to your work to have a regularly scheduled code coverage build test run? Back when the previous code coverage stuff worked, I actively used it to help me develop automated tests; I'd love to see this again. I wish we could get JS code coverage too, though. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
On 5/30/2013 5:56 PM, Johnny Stenback wrote: Some of the known issues with embracing git are: * Performance of git on windows is sub-optimal (we're already working on it). This has become a bit of an urban legend; I often see it repeated but seldom with actual measurements. I don't think it's a valid reason to avoid git unless there are specific cases where git's performance on Windows is insufficient compared to hg's performance on Windows. In some brief tests using hg.mozilla.org/mozilla-central versus github.com/mozilla/mozilla-central on a modern Core i7 laptop with an SSD and 8GB RAM, after throwing away the first result (so I'm measuring warm cache times; cold times would be useful too but would take more work for me to measure correctly):

  log the last 10,000 changes:                            git  0.745s   hg  2.570s
  blame mobile/android/chrome/content/browser.xul:        git  1.015s   hg  0.830s
  diff with no changes:                                   git  2.136s   hg  2.001s
  status:                                                 git  3.011s   hg  1.680s
  commit one-line change to configure.in:                 git  2.420s   hg  3.911s
  clone from remote:                                      git 26m43s    hg 19m01s
  pull from remote to an up-to-date clone:                git  1.585s   hg  0.875s
  update working dir from tip to FIREFOX_AURORA_23_BASE:  git 16.008s   hg 25.704s

There *are* some cases where git is worse than hg on Windows, but hg is as bad or worse for many common operations like log, diff, and commit. Overall I find both painful on Windows, but neither noticeably better than the other. (And of course some of these tests are highly unfair because the git repo has a more complete history than the hg one, or because they test network or server performance that is unpredictable and may vary between github and hg.m.o.) ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Code Review Session
On 5/24/13 8:46 AM, Benoit Girard wrote: I've got some patches that import webkit's check-style script to check the style[1]. Google and Linux also have style lint scripts (cpplint.py [1] and checkstyle.pl [2] respectively) that don't depend on a particular compiler tool like clang-format. [1] https://google-styleguide.googlecode.com/svn/trunk/cpplint/cpplint.py [2] https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl cpeterson ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Code coverage take 2, and other code hygiene tools
Joshua Cranmer wrote: if you use --disable-debug --enable-optimize='-g' Custom optimisation flags are not supported and should never have been used to turn on symbols anyway; you should use --disable-debug --enable-debug-symbols --disable-optimize instead. -- Warning: May contain traces of nuts. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
On 5/31/13 10:14 PM, Johnny Stenback wrote: On 5/31/2013 12:32 AM, Mike Hommey wrote: [...] Option 1 is where I personally think it's worth investing effort. It means we'd need to set up an atomic bidirectional bridge between hg and git (which I'm told is doable, and there are even commercial solutions for this out there that may solve this for us). Assuming we solve the bridge problem one way or another, it would give us all the benefits listed above, plus developer tool choice, and we could roll this out incrementally w/o the need to change all of our infrastructure at once. I.e. our roll out could look something like this: 1. create a read only, official mozilla-central git mirror 2. add support for pushing to try with git and see the results in tbpl 3. update tbpl to show git revisions in addition to hg revisions 4. move to project branches, then inbound, then m-c, release branches, etc Another way to look at this would be to make the git repository the real central source, and keep the mercurial branches as clones of it, with hg-git (and hg-git supports pushing to git, too). This would likely make it easier to support pushing to both, although we'd need to ensure nobody pushes octopus merges in the git repo. Yup, could be, and IMO the main point is that we'd have a lot of flexibility here. Option 2 is where this discussion started (in the Tuesday meeting a few weeks ago, https://wiki.mozilla.org/Platform/2013-05-07#Should_we_switch_from_hg_to_git.3F). Since then I've had a number of conversations and have been convinced that a wholesale change is the less attractive option. The cost of a wholesale change will be *huge* on the infrastructure end, to a point where we need to question whether the benefits are worth the cost. I have also spoken with other large engineering orgs about git performance limitations, one of which is doing the opposite switch, going from git to hg. I bet this is facebook. 
Their use case includes millions of changesets with millions of files (iirc, according to posts I've seen on the git list). I've promised not to mention names here, so I won't confirm nor deny... but the folks I've been talking to mostly have a repo that's a good bit less than a single order of magnitude larger than m-c, so a couple of hundred k files, not millions. And given the file count trend in m-c (see attached image for an approximation), that doesn't make me feel too good about a wholesale switch given the work involved in doing so. I wouldn't be surprised if depth of tree was impacting perf, given how git stores directories, with trees, tree children and refs. I would think that 1000 files in the top-level dir would perform better than 1000 files spread 20 directories deep. Axel ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Block cookies from sites I haven't visited
Note that this isn't actually enabled by default on beta or release builds. On 06/27/2013 12:55 PM, Jan Odvarko wrote: Firefox 22 introduced a new cookie feature that allows blocking cookies from sites the user hasn't visited. Blog post here: https://brendaneich.com/2013/06/the-cookie-clearinghouse/ This change also includes a different default value for the network.cookie.cookieBehavior preference, which is now: 3 == limit foreign cookies --- I'd like to fix the Firebug UI for changing cookie permissions on a site-by-site basis (Firebug always applies to the current page). The question is what is the correct argument to pass to the nsIPermissionManager.add() method to limit third-party cookies for a specific URI. I am using Ci.nsICookiePermission.ACCESS_LIMIT_THIRD_PARTY. Is that correct? Honza ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Problem with undefined references when building native extension
Hi, I'm looking for a little help building a native extension. I'm getting the following 'undefined reference' errors when linking on a Linux 64-bit platform (not tried others yet):

Executing: c++ -Wall -Wpointer-arith -Woverloaded-virtual -Werror=return-type -Wtype-limits -Wempty-body -Wsign-compare -Wno-invalid-offsetof -Wcast-align -fno-exceptions -fno-strict-aliasing -fno-rtti -ffunction-sections -fdata-sections -fno-exceptions -std=gnu++0x -pthread -pipe -DDEBUG -D_DEBUG -DTRACING -g -fno-omit-frame-pointer -fPIC -shared -Wl,-z,defs -Wl,--gc-sections -Wl,-h,libccnx_prot.so -o libccnx_prot.so /var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/tmpMkBkPL.list -lpthread -Wl,-z,noexecstack -Wl,--build-id -Wl,-rpath-link,/var/tmp/simon-central/obj-x86_64-linux-gnu/dist/bin -Wl,-rpath-link,/usr/lib -L/var/tmp/simon-central/obj-x86_64-linux-gnu/dist/bin -lxul -lmozalloc -L/var/tmp/simon-central/obj-x86_64-linux-gnu/dist/bin -L/var/tmp/simon-central/obj-x86_64-linux-gnu/dist/lib -lnspr4 -lplc4 -lplds4 -L/var/tmp/simon-central/obj-x86_64-linux-gnu/dist/lib -lnspr4 -lplc4 -lplds4 -lccn -Wl,-Bsymbolic -ldl

/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/tmpMkBkPL.list: INPUT(CcnHandler.o)
CcnHandler.o: In function `nsCString_external::get() const':
/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/../../dist/include/nsStringAPI.h:879: undefined reference to `nsACString::BeginReading() const'
CcnHandler.o: In function `ReentrantMonitor':
/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/../../dist/include/mozilla/ReentrantMonitor.h:43: undefined reference to `mozilla::BlockingResourceBase::BlockingResourceBase(char const*, mozilla::BlockingResourceBase::BlockingResourceType)'
CcnHandler.o: In function `~ReentrantMonitor':
/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/../../dist/include/mozilla/ReentrantMonitor.h:56: undefined reference to `mozilla::BlockingResourceBase::~BlockingResourceBase()'
CcnHandler.o: In function `ReentrantMonitorAutoEnter':
/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/../../dist/include/mozilla/ReentrantMonitor.h:182: undefined reference to `mozilla::ReentrantMonitor::Enter()'
CcnHandler.o: In function `~ReentrantMonitorAutoEnter':
/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/../../dist/include/mozilla/ReentrantMonitor.h:187: undefined reference to `mozilla::ReentrantMonitor::Exit()'
CcnHandler.o: In function `nsCreateInstanceByContractID':
/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn/../../dist/include/nsComponentManagerUtils.h:64: undefined reference to `vtable for nsCreateInstanceByContractID'
/usr/bin/ld.bfd.real: CcnHandler.o: relocation R_X86_64_PC32 against undefined hidden symbol `vtable for nsCreateInstanceByContractID' can not be used when making a shared object
/usr/bin/ld.bfd.real: final link failed: Bad value
collect2: ld returned 1 exit status
make[1]: *** [libccnx_prot.so] Error 1
make[1]: Leaving directory `/var/tmp/simon-central/obj-x86_64-linux-gnu/extensions/ccn'
make: *** [default] Error 2

My Makefile.in is:

DEPTH = @DEPTH@
topsrcdir = @top_srcdir@
srcdir = @srcdir@
VPATH = @srcdir@

include $(DEPTH)/config/autoconf.mk

XPI_NAME = ccnx
INSTALL_EXTENSION_ID = c...@ccnx.org
DIST_FILES = install.rdf
XPI_PKGNAME = ccnx-$(MOZ_APP_VERSION)
LIBRARY_NAME = ccnx_prot
USE_STATIC_LIBS = 1
IS_COMPONENT = 1

EXTRA_DSO_LDOPTS = \
  $(MOZ_COMPONENT_LIBS) \
  $(NSPR_LIBS) \
  $(XPCOM_LIBS) \
  -lccn

include $(topsrcdir)/config/rules.mk

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Problem with undefined references when building native extension
protocolma...@gmail.com wrote: $(XPCOM_LIBS) \ I *think* this needs to be XPCOM_GLUE_LDOPTS instead. -- Warning: May contain traces of nuts. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
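If that is the problem, the suggested change would be roughly the following (an untested sketch; the rest of the posted Makefile.in stays as-is, and whether $(MOZ_COMPONENT_LIBS) should remain depends on how the glue is configured for this extension):

```make
EXTRA_DSO_LDOPTS = \
  $(MOZ_COMPONENT_LIBS) \
  $(NSPR_LIBS) \
  $(XPCOM_GLUE_LDOPTS) \
  -lccn
```

The glue provides the nsStringAPI/nsComponentManagerUtils symbols listed in the link errors, which binary extensions are expected to get from the standalone glue rather than from libxul directly.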
Re: Windows XP virtual memory peak appears to be causing talos timeout errors for dromaeo_css
On 6/3/2013 7:28 AM, jmaher wrote: We have a top orange factor failure which is a talos timeout that only happens on Windows XP, and predominantly on the dromaeo_css test. What happens is we appear to complete the test just fine, but the poller we have on the process used to manage Firefox never indicates we have finished. After doing some screenshots and looking at the process list, I haven't found much, except that in the failing cases the _Total value for Virtual Bytes Peak is 2GB, while for all the passing instances it is ~1.25GB. Are there other things I should look for, or things I could change to fix this problem? This *might* be related to bug 859955, which a few people have been actively working on recently: https://bugzilla.mozilla.org/show_bug.cgi?id=859955 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: We should drop MathML
Regarding EPUB3, I don't think anyone said the whole format should be supported natively in browsers. An EPUB file is basically just a set of HTML5 pages (HTML, SVG, MathML and CSS) packed into an archive, together with additional metadata to describe the ebook content (title, author, chapters, pages, etc). So browsers are already able to render ebook pages as long as they support HTML5, and you don't need to build a JavaScript rendering engine. Of course, the most obvious method to extend HTML5 with metadata is to use an XML format for the whole ebook (XHTML5 mixed with the ebook metadata using namespaces), but again you should not focus on the syntax; another (non-XML) SGML format could have been used instead... The key idea is that mobile devices are using Web rendering engines and that Web formats are well-adapted to render documents on the screen, so relying on HTML5 is the most appropriate choice. Gecko is already able to render and edit HTML5, and as a consequence some Gecko-based applications exist to render or edit ebooks (see e.g. the EPUBReader Firefox add-on or BlueGriffon EPUB Edition). On the other hand, most mobile devices are currently WebKit-based, and so the rendering quality of mathematical formulas is not very good at the moment. That's why EPUB folks are complaining about the lack of MathML support in browser rendering engines. All the communication around FirefoxOS has been to say that it relies entirely on HTML5 and open standards, as opposed to competitors. So the point is that if Gecko enters the market of mobile devices, its good MathML support could be a competitive advantage when presented to EPUB companies that publish mathematical content (education, research, engineering, etc) or to users likely to read such ebooks. Of course, this can be generalized to other math applications, not only ebook readers. It seems that Benoit thought that MathML was not used at all and could easily be dropped or replaced.
As others have tried to explain, that's not true: there are already many concrete projects that have been developed over 15 years, several places where MathML plays a key role, and many people who keep asking for MathML support in browsers. Certainly LaTeX can be parsed into a tree for WYSIWYG editing, we can convert ASCIIMath directly to HTML+CSS, or accessibility tools could perhaps even read a formula without building a tree at all. But we don't care here, since we are talking about the Web and about Gecko-based applications, so we want to have an SGML format, a DOM, compatibility with all the HTML5 family, and to build tools on top of that. There are MathML-based projects to address the needs mentioned in this thread, and there are already many pages with MathML content on the Web. So there is no reason to replace MathML with something that will help for the simplest cases (already addressed by existing tools, as I said above) but won't work in general and will break all existing MathML content and projects. The main remaining issue is the lack of browser support, so dropping MathML from Gecko would definitely be the wrong choice and a very negative signal to the Web community, especially since one of the arguments given is that Gecko should follow what Blink does. Mozilla should be proud to have had volunteers involved in MathML projects during all these years and see that as a benefit. Fortunately, I see that the majority of comments in this thread go in that direction. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Reparenting a XulRunnner window in a Firefox Window
Paul Rouget writes: The Firefox OS Simulator is a XULRunner instance run from Firefox. Two processes, two windows, two different versions of Gecko. It would be very useful if we could display the simulator as part of Firefox. Inside a tab. Under X11, the XEmbed protocol was designed for this. GTK implements most of this in GtkPlug and GtkSocket, and we use that for plugins. This has some issues with events, such as propagating unused scroll events to the host. I think e10s was also using GtkPlug/GtkSocket at one stage, but I'm guessing inter-process layers have superseded this. If you can't load all the content in one process, then using the e10s infrastructure may be a good approach. It would require both processes to be a similar version, I assume, as there wouldn't be stability guarantees in the API. It would also require resources to fix up things that aren't complete or have deteriorated through lack of use, but this is something we want to have working at some stage anyway. I wouldn't recommend using the plugin APIs, as our plugin-container is a Gecko, and having two different Geckos in the same process is likely to be a problem. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
I used to also really dislike mercurial, but the more I used it the more I started to realize its power. Gps summarized it much better than I ever could have in a blog post last month: http://gregoryszorc.com/blog/2013/05/12/thoughts-on-mercurial-%28and-git%29/ I think mercurial and git are both great, but maybe the easier way to solve this problem is through better mercurial education. Andrew On 05/30/2013 08:56 PM, Johnny Stenback wrote: [TL;DR, I think we need to embrace git in addition to hg for Firefox/Gecko hacking, what do you think?] Hello everyone, The question of whether Mozilla engineering should embrace git usage for Firefox/Gecko development has come up a number of times already in various contexts, and I think it's time to have a serious discussion about this. To me, this question has already been answered. Git is already a reality at Mozilla: 1. Git is in use exclusively for some of our significant projects (B2G, Gaia, Rust, Servo, etc) 2. Lots of Gecko hackers use git for their work on mozilla-central, through various conversions from hg to git. What we're really talking about is whether we should embrace git for Firefox/Gecko development in mozilla-central. IMO, the benefits for embracing git are: * Simplified on-boarding, most of our newcomers come to us knowing git (thanks to Github etc), few know hg. * We already mirror hg to git (in more ways than one), and git is already a necessary part of most of our lives. Having one true git repository would simplify developers' lives. * Developers can use git branches. They just work, and they're a good alternative to patch queues. * Developers can benefit from the better merge algorithms used by git. * Easier collaboration through shared branches. * We could have full history in git, including all of hg and CVS history since 1998! * Git works well with Github, even though we're not switching to Github as the ultimate source of truth (more on that below). 
Some of the known issues with embracing git are: * Performance of git on windows is sub-optimal (we're already working on it). * Infrastructure changes needed... So in other words, I think there's significant value in embracing git and I think we should make it easier to hack on Gecko/Firefox with git. I see two ways to do that: 1: Embrace both git and hg as a first class DVCS. 2: Switch wholesale to git. Option 1 is where I personally think it's worth investing effort. It means we'd need to set up an atomic bidirectional bridge between hg and git (which I'm told is doable, and there are even commercial solutions for this out there that may solve this for us). Assuming we solve the bridge problem one way or another, it would give us all the benefits listed above, plus developer tool choice, and we could roll this out incrementally w/o the need to change all of our infrastructure at once. I.e. our roll out could look something like this: 1. create a read only, official mozilla-central git mirror 2. add support for pushing to try with git and see the results in tbpl 3. update tbpl to show git revisions in addition to hg revisions 4. move to project branches, then inbound, then m-c, release branches, etc While doing all this, things like build infrastructure and l10n would be largely, if not completely, unaffected. Lots of details TBD there, but the point is we'd have a lot of flexibility in how we approach this while the amount of effort required before our git mirror is functional will be minimal compared to doing a wholesale switch as described below. We would of course need to run high availability servers for both hg and git, and eventually the atomic bidirectional bridge (all of which would likely be on the same hardware). Option 2 is where this discussion started (in the Tuesday meeting a few weeks ago, https://wiki.mozilla.org/Platform/2013-05-07#Should_we_switch_from_hg_to_git.3F). 
Since then I've had a number of conversations and have been convinced that a wholesale change is the less attractive option. The cost of a wholesale change will be *huge* on the infrastructure end, to a point where we need to question whether the benefits are worth the cost. I have also spoken with other large engineering orgs about git performance limitations, one of which is doing the opposite switch, going from git to hg. While I don't see us hitting those limitations any time soon, I also don't think the risk of hitting those limitations is one we want to take in a wholesale change at this point. One inevitable question that will arise here if we were to switch wholesale over to git is whether we're also considering hosting Firefox/Gecko development on Github, and the answer to that question at this point is no (but we will likely continue to mirror mozilla-central etc to Github). We've been in talks with Github, but we will not get the reliability guarantees we need nor the flexibility we need if we were to
Bugzilla Secure Mail Viewer for GMail now on AMO
Hi folks, I've finally uploaded my secure mail viewer addon to AMO so that updates will work someday soon. Here's the link: https://addons.mozilla.org/en-US/firefox/addon/bugzilla-secure-mail-viewer/ -bent ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
On 5/30/2013 7:56 PM, Johnny Stenback wrote: [TL;DR, I think we need to embrace git in addition to hg for Firefox/Gecko hacking, what do you think?] Personally, the more I use git, the more I hate it. But I also see that other people love it... jcranmer++ :-) * Developers can use git branches. They just work, and they're a good alternative to patch queues. I can't see how they are a good alternative. With patch queues, I can maintain a complex refactoring in a patch queue containing dozens of smallish patches. In particular, I can easily realize I made a mistake in patch 3 while working on patch 21 and make sure that the fix ends up in patch 3; I don't see how that is easily achievable in git branches. I'm still totally at sea regarding the git development model - hg queues make a lot of sense to me. hg qqueue is very handy and I think covers a lot of what people think they need git for. I'm sure I'm missing things, but I keep accidentally adding files in git patches. * Git works well with Github, even though we're not switching to Github as the ultimate source of truth (more on that below). Outside of the community size, I personally can't see any benefits to Github for Mozilla; if all you want is community outreach, you pretty much just need a read-only git mirror on Github, which we already have. I honestly don't see any serious advantages other than some projects are using it anyways. -- Randell Jesup, Mozilla Corp remove .news for personal email ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Replacing Gecko's URL parser
On 7/1/13 4:51 PM, Mike Hommey wrote: Idempotent: Currently Gecko's parser and the URL Standard's parser are not idempotent. E.g. http://@/mozilla.org/ becomes http:///mozilla.org/ which when parsed becomes http://mozilla.org/ which is somewhat bad for security. My plan is to change the URL Standard to fail parsing empty host names. I'll have to research if there's other cases that are not idempotent. Note that some custom schemes may be relying on empty host names. In Gecko, we have about:foo as well as resource:///foo. In both cases, foo is the path part. I'll bet some extensions implement custom protocol handlers that do the same thing. (In fact, I'm sure of at least one; OverbiteFF uses null host names to denote pseudo URLs internal to the protocol handler.) This might be considered gauche, but probably not an isolated case. Cameron Kaiser ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Kyle Huey's Cycle Collector talk from Taipei meetup
On 6/14/2013 10:02 AM, Chris Pearce wrote: At the recent Web Rendering team meetup in Taipei Kyle gave a talk about how Gecko's cycle collector works, including covering what macros you need to use to ensure the cycle collector knows about your objects. We borrowed a camera from one of the Taipei team and recorded the talk, here it is for your viewing pleasure: http://people.mozilla.com/~cpearce/cycle-collector-khuey-Taipei-23May2013.mp4 There's a WebM transcode available for download here: http://people.mozilla.org/~cpearce/khuey-cycle_collector-Taipei-May2013.webm Chris P. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 7/9/13 3:14 PM, Taras Glek wrote: Conversely, poor code review practices hold us back. Agreed. At the same time, poor patch practices make reviews _much_ harder. We should generally expect good patch practices from established contributors; obviously expecting them from new contributors is not reasonable. I believe that getting fast review turnaround is a collaboration between the reviewer and the review requester; it shouldn't be solely on the reviewer to do a fast review no matter how hard the requester makes it. More on this below.

a) Realize that reviewing code is more valuable than writing code as it results in higher overall project activity. I agree that it results in higher activity. Whether it results in overall higher value depends on the activity. But as a first cut, I agree with this claim. If you find you can't write code anymore due to prioritizing reviews over coding, grow more reviewers. Easier said than done, of course. ;) Especially for people who get flagged to review abandoned code because there is no one else who is willing to learn it.

b) Communicate better. Yes, absolutely. The flip side of this is: don't request review from people who are labeled as being away in Bugzilla and expect fast turnaround. I really wish Bugzilla could let me flag myself as not available for reviews when I'm on vacation, say. Expecting people to comment about being on vacation while on vacation is, imo, not reasonable.

Does anyone disagree with my 3 points above? Modulo the caveats above, no, but I would like to add some points about what makes a patch easier to review. A lot of our contributors are not very good on these points:

* Split mass-changes or mechanical changes into a separate patch from the substantive changes.

* If possible, separate patches into conceptually-separate pieces for review purposes (even if you then later collapse them into a single changeset to push). Any time you're requesting review from multiple people on a single huge diff, chances are splitting it might have been a good idea.

* Actually address the review comments before requesting another review. It's very common to just ignore (miss?) some of the review comments, which means the reviewer then needs to triple-check that all the things they pointed out got fixed.

* When requesting a second review on a patch, provide an interdiff so that the reviewer can just verify that the changes you made match what they asked for. Bugzilla's interdiff is totally unsuitable for this purpose, unfortunately, because it fails so often.

* When requesting review or feedback on just part of the patch, make that very clear.

All of the above are a consequence of a simple observation: time to review, except for simple mass-changes, is nonlinear in patch size. For a single review pass, I would say it's probably quadratic in most cases. Since the number of review passes is itself typically non-constant in patch size, this means that the common modus operandi of posting a huge diff with a bunch of issues, then addressing some of the review comments and posting another huge diff, etc, takes up a huge amount of reviewer time, most of it basically wasted. -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Possibility of replacing SQLite with LMDB from the OpenLDAP project (or replacing the SQLite backend with LMDB)
On Saturday, July 6, 2013 2:26:27 AM UTC-7, Philip Chee wrote: 1. OpenLDAP Lightning Memory-Mapped Database (LMDB): http://symas.com/mdb/ 2. Benchmarks: http://symas.com/mdb/microbench/ 3. SQLite3 ported to use MDB instead of its original Btree code: https://gitorious.org/mdb/sqlightning Fwiw, I currently run my own build of Seamonkey using SQLightning on my laptops. I also have a custom build of XDAndroid running on my HTC TouchPro2 using this as well. There's a noticeable improvement in battery life. LMDB is an ultra-fast, ultra-compact key-value data store developed by Symas for the OpenLDAP Project. It uses memory-mapped files, so it has the read performance of a pure in-memory database while still offering the persistence of standard disk-based databases, and it is limited only by the size of the virtual address space (not by the size of physical RAM). It looks like LMDB has speed and size advantages on memory-constrained systems like Android and B2G. Cons: 1. Still very new. I have no idea what their code coverage looks like. 2. Uses their own OpenLDAP Public Licence (a BSD variant). There are claims that this is OSI-approved, but the OSI site itself doesn't list it. 3. Migration costs (if it ain't broke, don't fix it), not to mention more churn. Phil -- Philip Chee phi...@aleytys.pc.my, philip.c...@gmail.com http://flashblock.mozdev.org/ http://xsidebar.mozdev.org Guard us from the she-wolf and the wolf, and guard us from the thief, oh Night, and so be good for us to pass. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Heads up: difference in reference counting between Mozilla and WebKit worlds
Anthony Jones wrote: On 19/06/13 16:02, Robert O'Callahan wrote: I believe that in Webkit you're not supposed to call new directly. Instead you call a static create method that returns the equivalent of already_AddRefed. Do they have a lint checker we can use for that? Surely making the constructors private suffices? -- Warning: May contain traces of nuts. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Reviewer checklist
Some of us over in #mobile brainstormed a list of things that reviewers (and patch-writers) should check for in patches. The idea is to have a handy list of these things that often get forgotten, overlooked, or otherwise slip through. I have put up our list on the MDN wiki at [1] - please feel free to add/update (and most importantly, use!) the page. Some of the items are specific to working on Fennec, so I tried to make that clear by annotating those items. Also note that the list is specifically *not* intended for new contributors at Mozilla, because I expect it to evolve to contain arcane gotchas and stuff (if it hasn't already) that would generally be overwhelming for new contributors. For new contributor patches, I expect that the reviewer/mentor would use the checklist to identify specific problems. [1] https://developer.mozilla.org/en-US/docs/Developer_Guide/Reviewer_Checklist Cheers, kats ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Fix or disable windows desktop b2g builds
On 6/5/2013 3:48 PM, Chris AtLee wrote: Windows desktop b2g builds have been pretty broken for several weeks now, since around May 24 [1]. At this week's engineering and b2g meetings we discussed shutting these off if nobody has a strong reason to keep them around. If you are currently depending on the builds for anything, please let me know. I'm planning to turn these off at the end of the week if they're of little value, and/or we can't get them fixed up. If you're interested in fixing these builds, https://bugzilla.mozilla.org/show_bug.cgi?id=879370 is the bug for you! At the meeting I mentioned that Firefox OS Simulator uses b2g desktop builds. Currently we use self-built ones so this particular bug doesn't affect us but having continuous builds does allow us to spot when breaking changes come along so I'd like to keep them running in some form. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Kyle Huey's Cycle Collector talk from Taipei meetup
At the recent Web Rendering team meetup in Taipei Kyle gave a talk about how Gecko's cycle collector works, including covering what macros you need to use to ensure the cycle collector knows about your objects. We borrowed a camera from one of the Taipei team and recorded the talk, here it is for your viewing pleasure: http://people.mozilla.com/~cpearce/cycle-collector-khuey-Taipei-23May2013.mp4 Chris P. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On Tuesday, July 9, 2013 6:49:20 PM UTC-7, Boris Zbarsky wrote: On 7/9/13 6:11 PM, brendan wrote: I've said this before, not sure it's written in wiki-stone, maybe it should be: if you get a review request, respond same-day either with the actual review, or an ETA or promise to review by a certain date. Again, this is not viable during vacations/weekends... Unless by "get" you mean get the mail as opposed to have it appear in your queue. Yes, that's what I meant. How else could one respond within a day? People may need reminding or nagging but that should be the exception. It sounds like it isn't exceptional, or rare enough. Indeed. Ok. Does this need to go on wiki.m.o or MDN somewhere (not that it would help those who don't read the updated pages)? Should I use a bigger bully pulpit? /be ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Code coverage take 2, and other code hygiene tools
On 6/25/2013 1:19 PM, Dave Townsend wrote: I wish we could get JS code coverage too though. I think JS code coverage is a worthy goal, but it shouldn't block getting automated code coverage. The problem is that getting this to work reliably basically requires changes to the JS engine in the following ways: 1. An instrumentation-based approach basically needs a maintained, tested, quality decompiler for the Reflect.parse AST trees 2. Using the new Debugger APIs requires basically making the debugger usable for xpcshell tests. 3. A third approach I had, using bhackett's pc-counts, would require fixing them in IonMonkey and supporting them more fully. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
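To make the general idea concrete (a toy illustration only, in Python rather than SpiderMonkey, since Reflect.parse isn't available here): all three approaches ultimately record which lines actually execute, which a tracing hook can demonstrate in a few lines.

```python
import sys

# Toy line-coverage tracer: the concept behind instrumentation-based
# coverage, using Python's tracing hook as a stand-in for a JS engine.
covered = set()

def tracer(frame, event, arg):
    if event == "line":
        covered.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

def branchy(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)
branchy(1)
sys.settrace(None)

lines_hit = {ln for name, ln in covered if name == "branchy"}
print(sorted(lines_hit))  # only the lines on the taken branch are covered
```

A real tool would map hit counts back to source and report untaken branches, but the bookkeeping is the same shape.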
Re: Making proposal for API exposure official
On 6/21/13 1:45 PM, Andrew Overholt wrote: Back in November, Henri Sivonen started a thread here entitled Proposal: Not shipping prefixed APIs on the release channel [1]. The policy of not shipping moz-prefixed APIs in releases was accepted AFAICT. I've incorporated that policy into a broader one regarding web API exposure. I'd like to see us document this so everyone can easily find our stance in this area, similar to how they can with Blink [2]. I've put a draft here: https://wiki.mozilla.org/User:Overholt/APIExposurePolicy Andrew, can you clarify what "only as a part of this product" means in the Exceptions section? 1. during the first 12 months of development of new user-facing products, APIs that have not yet been embraced by other vendors or thoroughly discussed by standards bodies may be shipped only as a part of this product, thus clearly indicating their lack of standardization and limiting any market share they may attain chris peterson ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Bugzilla Keyword Standardization Proposal
We in QA have been discussing ways to help improve our workflows and provide some standardization across the teams in how we use Bugzilla. As part of this process we have come up with a proposal to make a small change to the definition of the qawanted keyword and to add a new keyword, qaurgent. Please read the proposal linked below and give us your feedback. https://wiki.mozilla.org/QA/qawanted_proposal ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Sandboxed, off-screen pages for thumbnail capture
On 27/06/2013 5:15 PM, Gervase Markham wrote: On 26/06/13 08:45, Mark Hammond wrote: There is evidence users find this troubling - eg, bug 762610 reports that a couple of users wrote to the mozilla webmaster about this. While it may just be a perception, it seems a perception worth managing. And even if someone can't read the exact bank balance figure, they might be able to count the columns, or see the balance is written in red. People can already delete a site using the X button - would making that more prominent help? I'm not sure, but I doubt it - they might not still be sitting at the computer when they notice it (ie, you log out of facebook and move away from the computer. Another user then sits down and opens a new tab - it's too late for you to close the thumbnail) It's not that uncommon for people to borrow a machine that happens to sit in, say, a living-room. If a guest in our house jumps on our communal family machine to (say) log into their bank or quickly check facebook, I'd expect them to be uncomfortable if their bank screen or photos from their facebook feed remain as thumbnails after they are logged out. This seems to be one of those cases where their discomfort is caused by actually having some way of noticing something which has always happened anyway. Their stuff may be still in the cache; and the user could have installed a logger anyway. Sure - as I mentioned it's more about *perception* - but that doesn't make it invalid. Could we consider using blurring, or just using the favicon, instead of this seemingly highly complicated parallel request infrastructure? I'd guess that blurring enough to obscure a red account balance figure or to render a photo from Facebook completely unrecognizable would look fairly ugly. I can't recognise Facebook photos from the Thumbnails as it is... The bank balance is a slightly odd case because it's one where there is information available from colour alone. And it's only a single bit of info at that. 
I still think we should look at a blurring solution as a much smaller amount of engineering effort for almost the same win. Note there are other ways that thumbnails can be missing - blurring these might reduce that number, but not eliminate them completely. The missing thumbnails I personally see are overwhelmingly *not* due to this, so this wouldn't constitute a win for me (YMMV, etc) Mark ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On Tuesday, 9 July 2013 15:46:31 UTC-4, Boris Zbarsky wrote: On 7/9/13 3:14 PM, Taras Glek wrote: Conversely, poor code review practices hold us back. Agreed. At the same time, poor patch practices make reviews _much_ harder. We should generally expect good patch practices from established contributors; obviously expecting them from new contributors is not reasonable. Why not? If there is a list of good patch practices, there is no reason we can't ask people to complete a checklist and comment on it. It would probably also let us train more reviewers, by letting them know where to start checking. I believe that getting fast review turnaround is a collaboration between the reviewer and review requester; it shouldn't be solely on the reviewer to do a fast review no matter how hard the requester makes it. +1. "It's so easy to review your code" should be one of the highest praises for a developer (code quality being equal). -- - Milan ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
Weirdly enough, I'm hoping we're using one or the other, and I think git is more promising. Yes, I need to rewrite a bunch of stuff l10n-wise, but still. I actually think that we should aim high. Don't bother about command lines, but what takes us to a system where people can just contribute to Firefox and Gecko on the web. Find the bug, click a button, start editing in your browser, try, review, merge. If we can get that working without involving a single thing but a browser, then we're making a change. git command line vs hg command line doesn't bring that change. The best model people have come up with so far is a fork per bug (LegNeato at the time) or per user (github). I see how someone can serve a few 100k forks (see our bug count) on git storage, but I don't see that with hg. That to me is the compelling argument. Doing both hg and git sounds like we'll get the worst of both worlds. I'm also advocating for taking hosting dead serious. We're at least struggling with the amount of repos we're serving on hg right now; adding more complexity won't make the systems more stable. Also, Vicent (githubber) has a great talk about the mistakes they made based on git, http://vimeo.com/64716825 (30 mins). They're well past the approaches we're currently at on the hg side. If we don't have to be compatible with hg, we can also rethink the constraints that's putting on us for merge days, etc. I know that option 2 isn't a quick path, and I love the beauty of hg, but we've failed to use that beauty to make the next game-changing infrastructure to support contributors, IMHO. I can see us having a better chance with git in the backend. Quick notes on hackability: In terms of stable and reliable hacking, hg and git are on par. Shell out to the command-line tools, parse the output. hg being mostly in python is nice for python hacking, but the code paths you're hooking into are far from stable. I do that extensively, and quite a few ports have been painful.
Scripting hg outside of python, well, yes. git has libgit2 now, which is a very basic C implementation, and jgit, a Java implementation of git. Bindings for libgit2 exist in many languages, but only the Ruby and C# ones are really good. In particular, the python binding is far from being pythonic, and far from being complete. Whether it's the right base to create a pythonic API is TBD. Regarding bugzilla integration, there are perl bindings that get modifications. I refuse to know perl well enough to make any statement on the value of the perl bindings, though. Axel On 5/31/13 2:56 AM, Johnny Stenback wrote: [TL;DR, I think we need to embrace git in addition to hg for Firefox/Gecko hacking, what do you think?] Hello everyone, The question of whether Mozilla engineering should embrace git usage for Firefox/Gecko development has come up a number of times already in various contexts, and I think it's time to have a serious discussion about this. To me, this question has already been answered. Git is already a reality at Mozilla: 1. Git is in use exclusively for some of our significant projects (B2G, Gaia, Rust, Servo, etc) 2. Lots of Gecko hackers use git for their work on mozilla-central, through various conversions from hg to git. What we're really talking about is whether we should embrace git for Firefox/Gecko development in mozilla-central. IMO, the benefits for embracing git are: * Simplified on-boarding: most of our newcomers come to us knowing git (thanks to Github etc), few know hg. * We already mirror hg to git (in more ways than one), and git is already a necessary part of most of our lives. Having one true git repository would simplify developers' lives. * Developers can use git branches. They just work, and they're a good alternative to patch queues. * Developers can benefit from the better merge algorithms used by git. * Easier collaboration through shared branches. * We could have full history in git, including all of hg and CVS history since 1998!
* Git works well with Github, even though we're not switching to Github as the ultimate source of truth (more on that below). Some of the known issues with embracing git are: * Performance of git on windows is sub-optimal (we're already working on it). * Infrastructure changes needed... So in other words, I think there's significant value in embracing git and I think we should make it easier to hack on Gecko/Firefox with git. I see two ways to do that: 1: Embrace both git and hg as a first class DVCS. 2: Switch wholesale to git. Option 1 is where I personally think it's worth investing effort. It means we'd need to set up an atomic bidirectional bridge between hg and git (which I'm told is doable, and there are even commercial solutions for this out there that may solve this for us). Assuming we solve the bridge problem one way or another, it would give us all the benefits listed above, plus developer
Re: review stop-energy (was 24hour review)
On 09/07/2013 21:29, Chris Peterson wrote: I really wish Bugzilla could let me flag myself as not available for reviews when I'm on vacation, say. Expecting people to comment about being on vacation while on vacation is, imo, not reasonable. I've seen people change their Bugzilla name to include a comment about being on PTO. We should promote this practice. We could also add a Bugzilla feature (just a simple check box or a PTO date range) that appends some vacation message to your Bugzilla name. The problem is, that doesn't work on the patch submission forms. If bugzilla does decide to autocomplete, you can't necessarily see all the info on the name, because the field isn't wide enough. If you don't need/look at the autocomplete, then you get zero indication of the bugzilla name. The only real chance you get to see that name change is if you're on a bug where they have already commented or filed. I wouldn't want something just for PTOs (e.g. meetups can take up review time for projects I'm not meeting up for), but I think it would be good to have the option to provide a warning with a little bit of text *and* have that a) displayed on the submission forms, and b) prompted/confirmed if the user decides to request anyway. Mark. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Replacing Gecko's URL parser
On 01/07/13 19:01 , Boris Zbarsky wrote: On 7/1/13 12:43 PM, Anne van Kesteren wrote: I outlined two issues below, but I'm sure there are more. Another big one I'm aware of is the issue of how to treat '\\' in URLs. -Boris We also have issues with hashes and URI-encoding ( https://bugzilla.mozilla.org/show_bug.cgi?id=483304 ). ~ Gijs ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Reparenting a XulRunnner window in a Firefox Window
On 02/07/13 17:34, Paul Rouget wrote: The Firefox OS Simulator is a XULRunner instance run from Firefox. Two processes, two windows, two different versions of Gecko. It would be very useful if we could display the simulator as part of Firefox, inside a tab. With Linux, we could use XReparentWindow [1]. I don't know about Windows and Mac. Any idea if this is something doable on Windows and Mac? Would there be any better approach? [1] http://tronche.com/gui/x/xlib/window-and-session-manager/XReparentWindow.html -- Paul I'm sure this is a dumb question since you didn't even mention it... but why not load whatever the contents of the simulator are in the Firefox tab? As in, why are these separate processes+geckos? ~ Gijs ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Sandboxed, off-screen pages for thumbnail capture
On 17/06/13 21:48, Drew Willcoxon wrote: Toolkit already has a thumbnail module, [PageThumbs], but it can only capture thumbnails of open content windows, same as they appear to the user. Windows may contain sensitive data that should not be recorded in an image, however, like bank account numbers and so on, so Firefox uses some [heuristics] to determine when it's safe to capture a given window. Can I challenge an assumption here? I'm not sure I know of a website which puts up sensitive data large enough that it would show up on a thumbnail. And even if it did, it's my browser on my machine. Do we have actual examples of where a thumbnail becomes dangerous? Could we consider using blurring, or just using the favicon, instead of this seemingly highly complicated parallel request infrastructure? Gerv ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Changes to file purging during builds
Gregory Szorc wrote: This may result in a multi-second jank at the beginning of the build. As opposed to the multi-minute jank of $(RM) -r _tests ? -- Warning: May contain traces of nuts. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Replacing Gecko's URL parser
Mike Hommey wrote: Note that some custom schemes may be relying on empty host names. In Gecko, we have about:foo as well as resource:///foo. In both cases, foo is the path part. about:foo is actually an nsSimpleURI, not an nsStandardURL, so it just throws when you try to access its host. On the other hand, chrome:// URIs are currently handed off to nsStandardURL too, which means that all chrome package names have to be lower case, since the host part is used for that. -- Warning: May contain traces of nuts. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
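The lower-case constraint follows from generic host handling in standard URL parsers: the scheme and host are case-normalized, the path is not. As an analogy only (this is Python's stdlib parser, not Gecko's nsStandardURL):

```python
from urllib.parse import urlsplit

# Standard URL parsers fold the scheme and host to lower case but leave
# the path alone; a host-based "package name" therefore ends up lowercase.
u = urlsplit("HTTP://Example.COM/Some/Path")
assert u.scheme == "http"           # scheme normalized
assert u.hostname == "example.com"  # host normalized
assert u.path == "/Some/Path"       # path case preserved
print("host parsed as:", u.hostname)
```

Any scheme whose parser routes the package name through the host slot gets the same folding, which is the situation Mike describes for chrome://.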
Re: review stop-energy (was 24hour review)
On 2013-07-09 4:48 PM, Boris Zbarsky wrote: On 7/9/13 4:29 PM, Chris Peterson wrote: Bugzilla's interdiff is totally unsuitable for this purpose, unfortunately, because it fails so often. Can we fix Bugzilla's interdiff? Not easily, because it does not have access to the original code... The BMO team is again considering switching to ReviewBoard, which should fix this problem, at least for our most-used repos. When we last evaluated this option, a few years ago, there actually wasn't a BMO team, so we went with the easier option (Splinter), since it is nontrivial to do proper Bugzilla-ReviewBoard integration. We have more resources now, and ReviewBoard seems to answer a lot of user complaints about code reviews, so we're working on this anew. Mark ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
On 5/31/13 3:20 PM, Matt Brubeck wrote: blame mobile/android/chrome/content/browser.xul: git 1.015s hg 0.830s Was this a git blame -C (which would be more similar to hg blame), or just a git blame? -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Windows XP virtual memory peak appears to be causing talos timeout errors for dromaeo_css
We have a top Orange Factor failure: a talos timeout that happens only on Windows XP and predominantly on the dromaeo_css test. What happens is that we appear to complete the test just fine, but the poller we have on the process used to manage Firefox never indicates we have finished. After doing some screenshots and looking at the process list, I haven't found much, except that in the failing cases the _Total value for Virtual Bytes Peak is 2GB, and for all the passing instances it is ~1.25GB. Are there other things I should look for, or things I could change to fix this problem? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Block cookies from sites I haven't visited
However, yes, you do want ACCESS_LIMIT_THIRD_PARTY. Cheers, Josh On 06/27/2013 01:17 PM, Josh Matthews wrote: Note that this isn't actually enabled by default on beta or release builds. On 06/27/2013 12:55 PM, Jan Odvarko wrote: Firefox 22 introduced a new cookie feature that allows blocking cookies from not-visited sites. Blog post here: https://brendaneich.com/2013/06/the-cookie-clearinghouse/ This change also includes a different default value for the network.cookie.cookieBehavior preference, which is now: 3 == limit foreign cookies --- I'd like to fix the Firebug UI that is available for changing cookie permissions on a site-by-site basis (Firebug always applies to the current page). The question is what is the correct argument to pass to the nsIPermissionManager.add() method to limit third-party cookies for a specific URI. I am using Ci.nsICookiePermission.ACCESS_LIMIT_THIRD_PARTY Is that correct? Honza ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Replacing Gecko's URL parser
On 7/1/13 8:30 PM, Gavin Sharp wrote: On Mon, Jul 1, 2013 at 10:58 AM, Benjamin Smedberg benja...@smedbergs.us wrote: Idempotent: Currently Gecko's parser and the URL Standard's parser are not idempotent. E.g. http://@/mozilla.org/ becomes http:///mozilla.org/ which when parsed becomes http://mozilla.org/ which is somewhat bad for security. My plan is to change the URL Standard to fail parsing empty host names. I'll have to research if there's other cases that are not idempotent. I don't actually know what this means. Are you saying that "http://@/mozilla.org/" sometimes resolves to one URI and sometimes another? function makeURI(str) ioSvc.newURI(str, null, null) makeURI("http://@/mozilla.org/").spec -> "http:///mozilla.org/" makeURI("http:///mozilla.org/").spec -> "http://mozilla.org/" In other words, makeURI(makeURI(str).spec).spec does not always return str. Gavin nitpicking, that's not "not idempotent". It's not round-tripping, but it looks like it's idempotent. Axel ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
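Axel's distinction can be made concrete with a toy normalizer (illustrative Python, not Gecko's parser): dropping an empty userinfo changes the string once, so the parse doesn't round-trip, yet parsing the result again changes nothing, which is exactly idempotence.

```python
# Toy normalizer: idempotent (normalize(normalize(s)) == normalize(s))
# without round-tripping (normalize(s) != s). Not Gecko's actual logic.
def normalize(url: str) -> str:
    scheme, _, rest = url.partition("://")
    rest = rest.lstrip("@")   # drop an empty userinfo like "http://@..."
    return f"{scheme}://{rest}"

s = "http://@mozilla.org/"
once = normalize(s)
twice = normalize(once)
assert once != s        # not round-tripping
assert twice == once    # but idempotent
```

Benjamin's http://@/mozilla.org/ example, by contrast, needs two passes to reach a fixed point, which is the genuinely non-idempotent case.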
Re: Embracing git usage for Firefox/Gecko development?
On 5/31/2013 12:32 PM, Boris Zbarsky wrote: On 5/31/13 3:20 PM, Matt Brubeck wrote: blame mobile/android/chrome/content/browser.xul: git 1.015s hg 0.830s Was this a git blame -C (which would be more similar to hg blame), or just a git blame? Good catch. (Sorry, I missed your messages on IRC warning me about this.) The above numbers were without -C. git blame -C takes about 3.7 seconds on this file. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Sandboxed, off-screen pages for thumbnail capture
On 6/17/2013 9:48 PM, Drew Willcoxon wrote: The desktop Firefox team is building a new Toolkit module that captures thumbnails of off-screen web pages. Critically, we want to avoid capturing any data in these thumbnails that could identify the user. More generally, we're looking for a way to visit pages in a sandboxed manner that does not interact with the user's normal browsing session. Does anyone know of such a way or know how we might change Gecko to support something like that? What about launching/forking a separate process to capture thumbnails for these pages? I don't mean an Electrolysis-style child process that is tightly coupled to the browser, but rather a separate program that does not use the Firefox profile at all. The browser would pass this program a URL and it would just render a page and save the thumbnail. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
IndexedDB Browser addon
Hi folks, Anyone who is using IndexedDB may find their lives slightly improved with this new addon: https://addons.mozilla.org/en-US/firefox/addon/indexeddb-browser/ This will let you inspect all the IndexedDB databases in your current profile as well as import external databases. It's still a very early version so expect some rough edges here and there. Hope it helps! -bent ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Possibility of replacing SQLite with LMDB from the OpenLDAP project (or replacing the SQLite backend with LMDB)
1. OpenLDAP Lightning Memory-Mapped Database (LMDB): http://symas.com/mdb/ 2. Benchmarks: http://symas.com/mdb/microbench/ 3. SQLite3 ported to use MDB instead of its original Btree code: https://gitorious.org/mdb/sqlightning LMDB is an ultra-fast, ultra-compact key-value data store developed by Symas for the OpenLDAP Project. It uses memory-mapped files, so it has the read performance of a pure in-memory database while still offering the persistence of standard disk-based databases, and it is limited only by the size of the virtual address space (not by the size of physical RAM). It looks like LMDB has speed and size advantages on memory-constrained systems like Android and B2G. Cons: 1. Still very new. I have no idea what their code coverage looks like. 2. Uses their own OpenLDAP Public Licence (a BSD variant). There are claims that this is OSI-approved, but the OSI site itself doesn't list it. 3. Migration costs (if it ain't broke, don't fix it), not to mention more churn. Phil -- Philip Chee phi...@aleytys.pc.my, philip.c...@gmail.com http://flashblock.mozdev.org/ http://xsidebar.mozdev.org Guard us from the she-wolf and the wolf, and guard us from the thief, oh Night, and so be good for us to pass. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Replacing Gecko's URL parser
On 07/01/2013 08:30 PM, Gavin Sharp wrote: On Mon, Jul 1, 2013 at 10:58 AM, Benjamin Smedberg benja...@smedbergs.us wrote: Idempotent: Currently Gecko's parser and the URL Standard's parser are not idempotent. E.g. http://@/mozilla.org/ becomes http:///mozilla.org/ which when parsed becomes http://mozilla.org/ which is somewhat bad for security. My plan is to change the URL Standard to fail parsing empty host names. I'll have to research if there's other cases that are not idempotent. I don't actually know what this means. Are you saying that "http://@/mozilla.org/" sometimes resolves to one URI and sometimes another? function makeURI(str) ioSvc.newURI(str, null, null) makeURI("http://@/mozilla.org/").spec -> "http:///mozilla.org/" makeURI("http:///mozilla.org/").spec -> "http://mozilla.org/" In other words, makeURI(makeURI(str).spec).spec does not always return str. Actually, the issue is that makeURI(makeURI(str).spec).spec != makeURI(str).spec. HTH Ms2ger ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
I also agree with Randell and Joshua. I've been using both lately and there are just a few things missing in git that I am used to in hg. Mercurial Queues is the most prominent. I am used to switching the order of patches in my queue, which seems like a pain to me in git. Or maybe I haven't quite found out how to do it reliably. Yes, somehow rebasing seems to be the solution here, but with git it's quite common that you push your changes to your own remote and then do pull requests. This again will require me to do push -f almost always, since I often change the order of patches. This doesn't sound ideal to me. Philipp ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
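For what it's worth, reordering commits in git can be done non-interactively by scripting the rebase todo list, though it's admittedly clumsier than shuffling an mq queue. A hedged, self-contained sketch (GNU sed assumed; it also assumes the commits touch different files, otherwise the rebase can conflict):

```shell
# Hedged sketch: swap the last two commits, the git analogue of
# reordering patches in an mq queue. Demo repo and file names invented.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com && git config user.name you

echo 1 > a.txt && git add a.txt && git commit -q -m "patch A"
echo 2 > b.txt && git add b.txt && git commit -q -m "patch B"

# Swap the first two "pick" lines of the rebase todo (GNU sed).
GIT_SEQUENCE_EDITOR='sed -i "1{h;d};2{G}"' git rebase -q -i --root

git log --format=%s   # patch A is now the newest commit
```

The push -f objection still stands, of course: any reorder rewrites history, so a personal remote gets force-pushed either way.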
this.docshell is null
I receive this error when I click to open Firefox. I have googled everything and cannot find a cure/fix for this problem. I don't understand it, I'm not that computer savvy. If you could help I would be most grateful. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Xulrunner app with findbar
Hello, I'm developing an application using XULRunner and I need to use a find bar (like the one from Firefox). I found something like this: XUL: <browser id="content1" flex="1" src="http://www.google.com/" type="content-primary"/> <findbar id="FindToolbar1" browserid="content1"/> JavaScript: var findbar1 = document.getElementById("FindToolbar1"); findbar1.open(0); The findbar is shown on screen, but the buttons are disabled and I can't search anything. How do I use the findbar with the browser? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: MOZ_NOT_REACHED and JS_NOT_REACHED are now MOZ_ASSUME_UNREACHABLE, and other changes.
On Saturday, June 29, 2013 2:04:58 PM UTC+12, Justin Lebar wrote: tl;dr - Changes from bug 820686: 4. Don't put code after MOZ_CRASH() or MOZ_ASSUME_UNREACHABLE(); it just gives a false sense of security. This appears not to be true. On Try, Windows builds fail with a 'not all control paths return a value' warning if a method ends with MOZ_CRASH() and no return statement, and the build fails because we treat warnings as errors. Log: https://tbpl.mozilla.org/php/getParsedLog.php?id=24863612&tree=Try Cheers, Nick ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Embracing git usage for Firefox/Gecko development?
Johnny Stenback j...@mozilla.com wrote: [TL;DR, I think we need to embrace git in addition to hg for Firefox/Gecko hacking, what do you think?] 1: Embrace both git and hg as a first class DVCS. 2: Switch wholesale to git. For what it's worth, as someone who is happy to use whatever is thrown at me (I used hg until I worked on B2G for a while and switched to git for a while) I think option 1 is the way to go here. This question has already sucked up way too much time and, assuming the bridge between hg and git works as advertised, letting people simply use their preferred VCS seems like the easiest way to move forward. -- Blake Kaplan ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 7/9/13 4:29 PM, Chris Peterson wrote: I've seen people change their Bugzilla name to include a comment about being on PTO. Sure. As a simple example, I did that on June 20th. I got about 20 review requests over the course of the following week and a half, and that's with most of the people who would normally ask me for review _not_ doing so because they knew I was away. Bugzilla's interdiff is totally unsuitable for this purpose, unfortunately, because it fails so often. Can we fix Bugzilla's interdiff? Not easily, because it does not have access to the original code... The most common failure mode here is something like this: 1) Author posts patch. 2) Review happens. 3) Author rebases patch to tip, makes edits to address review comments, re-requests review. At this point, bugzilla interdiff between the new patch and the old one will only work if the rebase was exceedingly trivial. Further, what's really desired in most cases is the diff between the rebased patch and the patch that addresses comments, not the diff between the old patch and the new patch. -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 7/9/13 6:11 PM, therealbrendane...@gmail.com wrote: I've said this before, not sure it's written in wiki-stone, maybe it should be: if you get a review request, respond same-day either with the actual review, or an ETA or promise to review by a certain date. Again, this is not viable during vacations/weekends... Unless by "get" you mean "get the mail", as opposed to "have it appear in your queue". People may need reminding or nagging but that should be the exception. It sounds like it isn't exceptional, or rare enough. Indeed. -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
How to find the different controls on a Firefox browser?
So the original question is how to find different controls in a XUL application. But to make it simpler, I want to be able to find various controls present in the Firefox browser through my JavaScript. I am trying to use the interfaces exposed by XULRunner. This is what I have tried thus far:

function clicking(txt) {
  var wm = Components.classes["@mozilla.org/appshell/window-mediator;1"]
                     .getService(Components.interfaces.nsIWindowMediator);
  var enumerator = wm.getEnumerator("navigator:browser");
  alert(1);
  while (enumerator.hasMoreElements()) {
    var win = enumerator.getNext();
    var loc = win.document.location;
    var url = loc.pathname;
    var link = loc.href;
    alert(2);
    alert(url);
    alert(link);
    win.close();
  }
}

When I run this code from an extension in Firefox, it gives me the reference of the top-level XUL window, which in my case is the browser. What I am typically interested in is enumerating the different controls (forward/backward buttons, menus, etc.) available in the Firefox browser. Any help? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
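One way to go from the window reference to its individual controls is to walk the chrome document's DOM and collect element ids. A hypothetical sketch — `collectIds` is a made-up helper, and chrome ids like "back-button" are assumptions about the browser's XUL markup, not a documented API:

```javascript
// Hypothetical helper: recursively collect the ids of all elements
// under a node, e.g. the chrome document of a navigator:browser window.
function collectIds(node, out = []) {
  if (node.id) {
    out.push(node.id);
  }
  for (const child of node.children || []) {
    collectIds(child, out);
  }
  return out;
}

// With a chrome window in hand, one could then look up specific
// controls by id, e.g. win.document.getElementById("back-button").
```

This only finds elements that carry an id; anonymous content inside XBL bindings would need different tools.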
Re: [b2g] Proposal for power metering per application framework on FFOS
(Disclaimer: I am the co-founder of the company that developed the below tool) We have a tool called LittleEye (http://www.littleeye.co) which does most of what you propose, but built for Android. It's a desktop tool that automatically injects an agent onto a connected phone. It can be used to monitor the power consumption of an individual application by modelling its power consumption based on the various hardware components it's using (CPU, display, GPS, etc.). If you are looking for an open source app which does something similar, check out the research tool http://powertutor.org. It also models app power based on its hardware consumption. In addition to WiFi, 3G/4G power is also useful, and there it has the additional complication of tail states, which are a big concern, with inefficient apps keeping the system awake all the time by doing periodic data transfers. Display power is also very useful (esp. on OLED screens), and one can measure that by sampling the screen periodically and looking at the various pixel values. Brightness of the screen is also a very useful factor. GPS is tricky because multiple apps could request the location. In Android we differentiate between an app requesting passive updates and active updates, and whether it's in the foreground or background. The power is distributed based on all these factors. We are not too familiar with FFOS, but would be happy to help if needed. :-) On Tuesday, June 11, 2013 6:59:03 AM UTC+5:30, Jonas Sicking wrote: On Sun, Jun 9, 2013 at 8:42 PM, Vincent Chang(張藝馨) changyihsincom wrote: Hi there, As you may know, battery life has always been one of the important concerns for users on mobile devices. Our intern student Gavin Lai and I propose APIs and a framework on FFOS to help gather power consumption statistics. Our goal is to provide power consumption information from Gecko. Applications can use this information to calculate the power consumption per app. 
We classify the power consumption into two groups: Application and System. For each application, several parts are included, as follows:

1. CPU
2. WiFi Active (WiFi traffic)
3. Radio Active (Radio traffic)
4. Wakelock

Some events or components are hard to attribute to individual apps. We classify this kind of power consumption into System:

1. Screen
2. WiFi On
3. WiFi Scan
4. Idle
5. Bluetooth On
6. Audio
7. Video
8. GPS (Not sure)
9. Bluetooth Active (Bluetooth traffic)

As many of these as possible I think we should try to attribute to each app whenever possible. GPS in particular should be very possible to attribute to the particular app that is using it. We know whenever an app is requesting to get information from the GPS. We should divide GPS power consumption between any app that has requested location information within the last X seconds. Same thing should be possible to do for Bluetooth, I would imagine? In general I think we should work towards making the second list as short as we can. But it's something that we can do over time. Anything that turns out to be a high battery drain, we should prioritize splitting out per-app. Which brings up the point that it'd be great to add telemetry probes for each component. That way we will see during dogfooding which components use up a lot of battery. That allows us both to prioritize making them use less battery, when possible, and to prioritize logging the data per-app. / Jonas ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
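Jonas's "divide among apps that requested location within the last X seconds" rule is easy to sketch. A hypothetical illustration only — the function name, the 30-second window, and the even-split policy are assumptions, not part of the proposal:

```javascript
// Hypothetical sketch: split measured GPS energy evenly across apps
// that requested location within the last WINDOW_MS; if no app
// requested recently, the energy stays in the "System" bucket.
const WINDOW_MS = 30 * 1000;

function attributeGpsEnergy(totalEnergy, requests, now) {
  // requests: array of { app, timestamp } location requests
  const active = [...new Set(
    requests
      .filter(r => now - r.timestamp <= WINDOW_MS)
      .map(r => r.app)
  )];
  if (active.length === 0) {
    return { system: totalEnergy };
  }
  const share = totalEnergy / active.length;
  const perApp = {};
  for (const app of active) {
    perApp[app] = share;
  }
  return perApp;
}
```

A refinement in the spirit of the Android comments above would be to weight the split by foreground/background state or passive/active updates rather than dividing evenly.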
Re: Possibility of replacing SQLite with LMDB from the OpenLDAP project (or replacing the SQLite backend with LMDB)
On 06/07/2013 11:26, Philip Chee wrote: LMDB is an ultra-fast, ultra-compact key-value data store developed by Symas for the OpenLDAP Project. It uses memory-mapped files, so it has the read performance of a pure in-memory database while still offering the persistence of standard disk-based databases, and is only limited by the size of the virtual address space (it is not limited to the size of physical RAM). It looks like LMDB has speed and size advantages on memory-constrained systems like Android and B2G. Ehr, I wrongly replied instead of following up, let me paste again: As for many other DBMSes around, comparisons are pretty hard; just relying on microbenchmarking doesn't help much. What's best: LMDB, LevelDB, Kyoto Cabinet, SQLite4? It's hard to tell just by looking at these graphs; you'd need measurements done directly on your most compelling use-cases. Just getting those measures is a project by itself. Then there's the cost of conversion and added code size, for which SQLite4 may be a winner, considering we'd need fewer changes and we'd likely have more code reuse from SQLite3. Then there's MemShrink: I couldn't find the maximum and average memory used in the reported benchmarks; we are currently using a max of 2MB per DB connection, would it be comparable? I finally wonder how they solved the problem of mmap corruption; SQLite can also use mmap IO, and it may be up to ten times faster, but has no protection against memory corruption due to stray pointers or buffer overflows in the application. So, it's definitely an option to consider when we decide to start working on a new generic storage, but we need far more details to be able to properly evaluate it. Fwiw, SQLite4 will support swappable engines, so you could use LMDB as the SQLite4 engine, and a backend is already under development from what I read on the LMDB page. -m ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
running tests in HiDPI mode on the build machines
Can you explain what would need to be done for Android to get into this mode? It might be difficult to make this work with our current solution for automated tests. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 10/07/13 15:09, smaug wrote: One thing, which has often been brought up, would be to have an automatic coding style checker other than just Ms2ger. At least in the DOM land we try to follow the coding style rules rather strictly, and it would ease reviewers' work if there was some good tool which does the coding style check automatically. In Gaia, we have a Git pre-commit hook that runs our linter for every commit (if the committer has installed the linter). You can also see that we only run it on specific directories. (And in case you know what you're doing, you can bypass it with |git commit --no-verify|.) https://github.com/mozilla-b2g/gaia/blob/master/tools/pre-commit ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 07/09/2013 03:14 PM, Taras Glek wrote: Hi, Browsers are a competitive field. We need to move faster. Eliminating review lag is an obvious step in the right direction. I believe good code review is essential for shipping a good browser. Conversely, poor code review practices hold us back. I am really frustrated with how many excellent developers are held back by poor review practices. IMHO the single worst practice is not communicating with the patch author as to when the patch will get reviewed. Anecdotal evidence suggests that we do best at reviews where the patch in question lines up with the reviewer's current project. The worst thing that happens there is rubber-stamping (eg reviewing non-trivial 60KB+ patches in 30min). Anecdotally, latency correlates inversely with how close the reviewer is to the patch author, eg: same project < same team < same part of the organization < org-wide < random community member. I think we need to change a couple of things*: a) Realize that reviewing code is more valuable than writing code, as it results in higher overall project activity. If you find you can't write code anymore due to prioritizing reviews over coding, grow more reviewers. b) Communicate better. If you are an active contributor, you should not leave r? patches sitting in your queue without feedback: "I will review this next week because I'm (busy reviewing ___ this week|away at a conference)." I think Bugzilla could use some improvements there. If you think a patch is lower priority than your other work, communicate that. c) If you think saying nothing is better than admitting that you won't get to the patch for a while**, that's passive aggressiveness (https://en.wikipedia.org/wiki/Passive-aggressive_behavior). This is not a good way to build a happy coding community. Managers, look for instances of this on your team. In my experience the main cause of review stop-energy is lack of will to inconvenience one's own projects by switching gears to go through another person's work. 
I've seen too many amazing, productive people get discouraged by poor review throughput. Most of these people would rather not create even more tension by complaining about this... that's what managers are for :) Does anyone disagree with my 3 points above? Can we make some derivative of these rules into a formal policy (some sort of code of developer conduct)? Taras * There are obvious exceptions to the above guidelines (eg deadlines). ** Holding back bad code is a feature, not a bug; do it politely. In general, +1 to all 3 points. For b) it would be nice if Bugzilla would also let the patch author indicate that a patch isn't urgent. (Or perhaps the last sentence of b) means that. Not sure whether 'you' refers to the reviewer or the patch author :) ) One thing, which has often been brought up, would be to have an automatic coding style checker other than just Ms2ger. At least in the DOM land we try to follow the coding style rules rather strictly, and it would ease reviewers' work if there was some good tool which does the coding style check automatically. Curious, do we have some recent statistics on how long it takes to get a review? Hopefully per module. On 07/09/2013 03:46 PM, Boris Zbarsky wrote: * Split mass-changes or mechanical changes into a separate patch from the substantive changes. * If possible, separate patches into conceptually-separate pieces for review purposes (even if you then later collapse them into a single changeset to push). Any time you're requesting review from multiple people on a single huge diff, chances are splitting it might have been a good idea. ... Splitting patches is usually useful, but having a patch containing all the changes can also be good. If you have a set of 20-30 patches, but not a patch which contains all the changes, it is often hard to see the big picture. Again, perhaps some tool could help here. Something which can generate the full patch from the smaller ones. 
-Olli ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Disabling XUL -moz-inline-stack/-moz-stack on the Web?
On 6/13/13 11:56 PM, Robert O'Callahan wrote: Bug 875060 made me wonder whether we should disable XUL 'display' values on the Web, perhaps starting with -moz-stack and -moz-inline-stack. They do very little that can't be done with absolute positioning. Perhaps we would leave XUL 'display' values enabled for pages where remote XUL has been whitelisted. What do people think? I think it's a great idea, just like I did a week ago when I filed https://bugzilla.mozilla.org/show_bug.cgi?id=879275 ;) I was more worried about -moz-box, since people are more likely to (ab)use it, but you're right that the compat issues are likely to be smaller with the stack display types. The point about allowing them if remote XUL was whitelisted is a good one, though; we probably should do that. -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 7/10/13 1:58 PM, Mark Côté wrote: The BMO team is again considering switching to ReviewBoard, which should fix this problem How does ReviewBoard address this? Again, what we have in the bug is diff 1 against changeset A and diff 2 against changeset B that incorporates the review changes. What the reviewer would like to see is a diff from 1 rebased against B to 2. I guess that's possible if the system tries to automatically rebase diff 1 against changeset B, but if the automatic rebase fails you still fail... -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On 7/10/13 12:56 PM, msrecko...@mozilla.com wrote: Why not? Because submitting a first patch is scary enough as it is that we should try to minimize the roadblocks involved in it. This is also why the reviewer in cases like that should handle setting the checkin-needed keyword (or just land the patch), push it to try as needed, etc. If there is a list of good patch practices, there is no reason we can't ask people to complete a checklist and comment on it. I think that's a fine thing to do for people who are planning to do more than one patch, and is definitely something we should do when someone starts contributing somewhat consistently. But I'm not convinced that it's reasonable to require it for a first patch... -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Watching pages on MDN
On 6/14/13 10:19 AM, Eric Shepherd wrote: We have just thrown the switch, and if you're logged into MDN, you should now have a new Email me article changes button next to the Edit button near the top of the page. Thank you for making this happen! -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Watching pages on MDN
Dear devs and writers, We have just thrown the switch, and if you're logged into MDN, you should now have a new Email me article changes button next to the Edit button near the top of the page. Click this button, and you'll start receiving emails when that page is changed. The emails will tell you which article changed, who changed it, and include a diff that shows what changed. We don't yet have support for watching entire subtrees, so you'll have to do this on a page-by-page basis, but it's a good start. Enjoy! -- Eric Shepherd Developer Documentation Lead Mozilla Blog: http://www.bitstampede.com/ Twitter: @sheppy ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Making proposal for API exposure official
On 6/24/13 5:49 PM, Ehsan Akhgari wrote: What about changes made in order to improve compliance with a spec not developed by Mozilla? This is a tough call. My experience is that most specs out there are buggy in their use of WebIDL and in their general API design, so such a change would need someone who understands the various pieces involved to have reviewed the spec first... -Boris ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Code coverage take 2, and other code hygiene tools
On 06/25/2013 03:50 AM, Clint Talbert wrote: So, the key things I want to know: * Will you support code coverage? Would it be useful to your work to have a regularly scheduled code coverage build test run? I have looked at decoder's older code coverage data [1] before, and found it very useful. I strongly support getting it run automatically. One feature that would it make even more useful to me is code coverage for generated code, and the WebIDL bindings in particular. (When I last talked to decoder about this, it only covered files in the source tree; I don't know if that has changed since.) * Would you want to additionally consider using something like JS-Lint for our codebase? I don't want to derail this thread, but AIUI, JSLint enforces Douglas Crockford's style, which is not necessarily something anyone else wants to use. HTH Ms2ger [1] http://people.mozilla.org/~choller/firefox/coverage/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Sandboxed, off-screen pages for thumbnail capture
On 25/06/2013 5:29 PM, Gervase Markham wrote: On 17/06/13 21:48, Drew Willcoxon wrote: Toolkit already has a thumbnail module, [PageThumbs], but it can only capture thumbnails of open content windows, same as they appear to the user. Windows may contain sensitive data that should not be recorded in an image, however, like bank account numbers and so on, so Firefox uses some [heuristics] to determine when it's safe to capture a given window. Can I challenge an assumption here? I'm not sure I know of a website which puts up sensitive data large enough that it would show up on a thumbnail. There is evidence users find this troubling - eg, bug 762610 reports that a couple of users wrote to the mozilla webmaster about this. While it may just be a perception, it seems a perception worth managing. And even if someone can't read the exact bank balance figure, they might be able to count the columns, or see the balance is written in red. And even if it did, it's my browser on my machine. It's not that uncommon for people to borrow a machine that happens to sit in, say, a living-room. If a guest in our house jumps on our communal family machine to (say) log into their bank or quickly check facebook, I'd expect them to be uncomfortable if their bank screen or photos from their facebook feed remain as thumbnails after they are logged out. Do we have actual examples of where a thumbnail becomes dangerous? "Dangerous" seems an unreasonably high bar for this. Making our users uncomfortable would seem a reasonable trigger. Could we consider using blurring, or just using the favicon, instead of this seemingly highly complicated parallel request infrastructure? I'd guess that blurring enough to obscure a red account balance figure or to render a photo from Facebook completely unrecognizable would look fairly ugly. Ditto for scaling up a favicon - although I must admit I've never tried either of these options. Hopefully someone from UX could weigh in here... 
Mark ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Building with a pre-built libxul.so
I recently discovered that I can build Firefox by downloading the XULRunner SDK and using it instead of compiling everything. A clobber build finished in under a minute. Great! (Ignoring the not-insignificant download time.) If I build this way I can hack on the Firefox-specific UI. Not much else, including running tests (that I've figured out, anyway). So my question is: if I want to hack on something like the Add-on Manager (I do), do I need to build everything? Why? Is there any way that I can use the pre-compiled bits from the SDK and build the rest? If not, can we make one? It seems wasteful to spend two hours (in my case) compiling a massive chunk of code I'm not interested in (when there is a compiled version I can download), just to get to a state where I can work on the bits that I am interested in. Sorry if this appears selfish, but I think it'd be a massive help to those of us who are only interested in the front-end code. GL ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Sandboxed, off-screen pages for thumbnail capture
On 26/06/13 08:45, Mark Hammond wrote: There is evidence users find this troubling - eg, bug 762610 reports that a couple of users wrote to the mozilla webmaster about this. While it may just be a perception, it seems a perception worth managing. And even if someone can't read the exact bank balance figure, they might be able to count the columns, or see the balance is written in red. People can already delete a site using the X button - would making that more prominent help? It's not that uncommon for people to borrow a machine that happens to sit in, say, a living-room. If a guest in our house jumps on our communal family machine to (say) log into their bank or quickly check facebook, I'd expect them to be uncomfortable if their bank screen or photos from their facebook feed remain as thumbnails after they are logged out. This seems to be one of those cases where their discomfort is caused by actually having some way of noticing something which has always happened anyway. Their stuff may be still in the cache; and the user could have installed a logger anyway. Could we consider using blurring, or just using the favicon, instead of this seemingly highly complicated parallel request infrastructure? I'd guess that blurring enough to obscure a red account balance figure or to render a photo from Facebook completely unrecognizable would look fairly ugly. I can't recognise Facebook photos from the Thumbnails as it is... The bank balance is a slightly odd case because it's one where there is information available from colour alone. And it's only a single bit of info at that. I still think we should look at a blurring solution as a much smaller amount of engineering effort for almost the same win. Gerv ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform