Re: PSA: Staying in control of Phabricator comments, and avoiding a footgun
Karl Tomlinson writes: > "Show Older Inlines" "Disabled" sounds like something to avoid > like the plague. I wonder whether global configuration to remove > that setting is available ... My comment was rash. The setting can actually be very useful (at least temporarily) because, when disabled, the comments are still in the history but always link to the code where the comment was made (instead of using heuristics to move the comment). However, some other mechanism is required to keep track of which comments are addressed. "When a revision is updated, Phabricator attempts to bring inline comments on the older version forward to the new changes. You can disable this behavior if you prefer comments stay anchored in one place." My personal preference would probably be that the setting applied only to the history, making it orthogonal to the "Hide Older Inlines" option in the transient banner. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: PSA: Staying in control of Phabricator comments, and avoiding a footgun
Thank you for highlighting those, Jonathan. "Show Older Inlines" "Disabled" sounds like something to avoid like the plague. I wonder whether global configuration to remove that setting is available ... I find the "Collapse" operation on inline comments useful to track remaining relevant comments, especially given the reviewer has little control over the "Done" flag. "Collapse" state is maintained over page loads. I assume it is tracked per login. (It is unset and not available when not logged in, at least.)
Intent to ship: AudioWorklet
As of this week I intend to turn on AudioWorklet by default for all platforms. It has been developed behind the "dom.audioworklet.enabled" and "dom.worklet.enabled" preferences. Status in other browsers is: Chrome, Opera: shipping. Edge, Webkit: not shipping, intentions unknown. Product: Adam Stevenson Bug to turn on by default: https://bugzilla.mozilla.org/show_bug.cgi?id=1616725 This feature was previously discussed in this "Intent to implement" thread: https://groups.google.com/forum/#!msg/mozilla.dev.platform/EtjfqRd9FI0/30DfQ3c6DgAJ =Target release This feature was enabled by default on Nightly from the 20200331093527 build. Nightly is now 77, but there is demand for this feature including some WFH benefits from one service provider, and so we are looking to uplift the pref switch to 76. =AudioNode lifetime interoperability The issue mentioned in the intent to implement post regarding observability of garbage collection through [AudioNode lifetimes] and interoperability with Chrome has been satisfactorily resolved by hiding the observability of AudioNode lifetimes. Chrome has modified their implementation in line with this resolution. =import Gecko's AudioWorklet implementation does not yet include support for static import of nested scripts. The lack of support shows up as rejection of the promise returned from Worklet#addModule(). (Worklets simply do not support dynamic import.) =Stability The specification is expected to be stable except for the nature of [parameters] passed to the client process() callback. The `inputs` and `outputs` parameters each have a `length` attribute and an indexed property getter, and the `parameters` parameter has a named property getter. This much is stable. There is ongoing discussion regarding whether the underlying JS Objects are frozen and whether Float32Array property values can be transferred via MessagePort#postMessage(). 
[AudioNode lifetimes] https://github.com/WebAudio/web-audio-api/issues/1471 [parameters] https://github.com/WebAudio/web-audio-api/issues/1933
NotNull and pointer parameters that may or may not be null
Google style requires pointers for parameters that may be mutated by the callee, which ensures that the potential mutation is visible at the call site. Pointers to `const` types are permitted, but only recommended when "input is somehow treated differently" [1], such as when a null value may be passed. Comments at function declarations should mention "Whether any of the arguments can be a null pointer." [2] Some Gecko code has instead used a different reference/pointer convention to identify parameters which can be null. With Google style, there is the question of whether to use a different mechanism to indicate whether or not a pointer parameter may be null. If a method signature wants a pointer to an object of a particular type, then I would usually expect to need to provide a pointer to an object, unless documented otherwise. I wonder whether, in the general case, possibly-null pointers are more the exception than the rule? Does use of NotNull add much value if most pointer parameters are assumed non-null, but not explicitly wrapped? I wonder whether the reverse kind of wrapping would be more useful? A MaybeNull type was previously contemplated but considered more complex and less obviously useful [3]. A NotNull wrapping of a parameter would catch nulls when set, in debug builds. A MaybeNull wrapping could catch nulls when dereferenced. Assertions don't really seem necessary to me because we are almost certain to find out anyway if code that expects non-null receives a null pointer. Behavior is technically undefined, but I don't recall ever seeing a small null-offset dereference bug rated worse than sec-low. The primary benefit of a wrapping would be to annotate expectations, and perhaps even to require more thought about what those are. mozilla::NotNull was introduced in 2016 [4] but hasn't seen much adoption. I found very few header files that use NotNull consistently, but looked at the ones with the most usage.
At the time that NotNull usage was introduced to image/DecoderFactory.h [5], it was applied to eight parameters. Two pointer parameters did not get wrapped with NotNull. Of those, the `const char* aMimeType` parameter on the public method was assumed non-null by the implementation. The other parameter was for a private method that permits `RasterImage* aImage` to be null. Since then, two additional `IDecodingTask** aOutTask` pointer parameters have been added, each of which is assumed non-null. intl/Encoding.h [6] has seven uses of NotNull in return types. In the exposed C++ API, I found five instances of pointer return values where NotNull was not applied: each used nullptr as a sentinel. It seems that NotNull is particularly useful for Encoding because sometimes there may be a sentinel value and other times not. A MaybeNull alternative would be at least as useful in this case. I'm inclined to skip NotNull unless there is a special reason to use it. Anyone have reason to feel differently? [1] https://google.github.io/styleguide/cppguide.html#Reference_Arguments [2] https://google.github.io/styleguide/cppguide.html#Function_Comments [3] https://groups.google.com/d/msg/mozilla.dev.platform/_jfnwDvcvN4/rl6c3TcFAQAJ [4] https://groups.google.com/d/msg/mozilla.dev.platform/6M_afibWhBA/MCmrYktaBgAJ [5] https://searchfox.org/mozilla-central/rev/f6528fc8520d47e507877da3dda798ab57385be2/image/DecoderFactory.h#93 [6] https://searchfox.org/mozilla-central/rev/4fbcfe6fecf75d7b828fbebda79a31385f7eab5f/intl/Encoding.h
non-const reference parameters in new and older code
https://google.github.io/styleguide/cppguide.html#Reference_Arguments has a simple rule to determine when reference parameters are permitted: "Within function parameter lists all references must be const." This is consistent with Mozilla's previous coding style: "Use pointers, instead of references for function out parameters, even for primitive types." [1] However, across Gecko there are different interpretations of what "out" parameter means. The Google style considers a parameter to be an out parameter if any of its state may be mutated by the callee. In some parts of Gecko, a parameter is considered an out parameter only if the callee might make wholesale changes to the state of the parameter. Well before the announcement of the switch to Google style, this interpretation was discussed in 2017 [2], with subsequent discussion around which types were suitable as non-const reference parameters. I'm asking: how should existing surrounding code with different conventions affect when it is acceptable to follow Google style for reference or pointer parameters? The announcement of changes to Mozilla style [3] says > Google C++ style will be permitted but not initially enforced, > and consistency with surrounding code should take precedence. and > we have decided that by allowing the usage of the Google C++ > Style in new areas, we are taking an incremental step towards > that ultimate goal [Google C++ Style] by allowing (not > requiring) people to write code immediately after our switch > that is consistent with our newly adopted coding style [...] > The intention here is not that people intermix Google C++ Style > and Mozilla Style code on a per function or even a per file > basis but rather that they be free to do so when starting new, > self-contained work, or to convert an entire module. Parameters cross module boundaries.
For example, WebIDL bindings use non-const references and pointers to distinguish between C++ representations of non-nullable and nullable interface and callback types [4]. Classes in many modules need to break from Google style to provide appropriate calling conventions to WebIDL bindings. Should this mean that such classes need to be consistent with the WebIDL binding convention when declaring other methods, just because they need to implement some methods for WebIDL bindings? If conventions should traverse module boundaries in this way, then very little new code would follow the new style. Is that the expectation, or is it preferable to aim to follow the new style where possible? A recent post [5] appears to propose a slightly different interpretation: > In other words, we should default to using Google C++ style when > doing so would not be massively more disruptive or inconsistent > than the alternative. So we're not boiling the ocean over mFoo, > but preferring the explicit integer types and citing the Google > style guide is a reasonable thing to do. Declaring new methods to follow Google parameter conventions is not in general massively more disruptive, or even inconsistent, than the alternative. Even when working with similar types, the mix of pointer/reference usage and the existence of other situations where references are not suitable [6] mean that conversions between references and pointers are already regularly required. I can imagine that the reference/pointer convention between nullable and non-nullable types may have been particularly useful for implementing WebIDL interfaces, but that distinction is not necessarily the best reference/pointer convention in general, and so choices made for WebIDL bindings should not necessarily influence other code. Consistency is good, but that argument can be applied in either direction. There has been similar conflict between reference parameter conventions in Blink.
Google and Chromium styles forbid mutable reference parameters. WebKit introduced mutable reference parameters to help in wiping out phantom null checks [7], but this change was not completed (in Blink at least). Code that has moved from Chromium into Blink has not changed style [8], and blink/core/layout/ng has chosen Google style rather than legacy WebKit style so as to highlight side effects in callees [9]. People using mutable reference parameters are typically pleased with their choice, and people using only const reference parameters are pleased with their choice [10]. Blink style was recently (March 20) changed to accept either style, to reflect reality [11]. No single choice between the conventions has yet been made for Blink. When reading over a call site and surrounding code, I find knowledge of the mutability of arguments necessary first for high-level understanding. Missing a mutation often means completely misunderstanding. Whether arguments may be null or not is usually a secondary detail in understanding the function of the code. Disclaimer: I was asked by a reviewer to post about this topic, but this post has my own opinions.
precedence of Mozilla, Google, and clang-format styles for C++ code
Near the top of https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style there is > New code should try to conform to these standards, so it is as > easy to maintain as existing code. [...] > > This article is particularly for those new to the Mozilla > codebase [...]" Before requesting a review, please read over > this document, making sure that your code conforms to > recommendations." > > Firefox code base uses the Google Coding style for C++ code [2] On reading over the document, the reader finds a number of additional guidelines, some of which are complementary or orthogonal to Google style and some of which are contradictory. Can we clarify which documents have precedence so that we can spell this out clearly on this page, please? "we would like to convert entirely to this [Google C++] coding style -- perhaps with some exceptions for Mozilla-specific constraints". [1] Would the Mozilla coding style be the appropriate place to list such exceptions? The existing recording of a recent resolution [3] on this page seems helpful to me. The reason to think this page might not be the appropriate place is because [1] > we will retire the Mozilla C/C++ coding style. The intention > behind retiring the current Mozilla style is to just stop > maintaining it as an active coding style going forward, but the > documentation for this coding style will be kept intact. What is the motivation for keeping intact the legacy coding style? Is it to guide those contributing in code that has not yet moved to Google style? I wonder how useful that would be given "we now have significant inconsistencies in the style of our codebase" [1]? If it is useful to keep intact the documentation of legacy style, can we distinguish which are the guidelines that should apply only to older code? "we are not aiming to enforce any naming conventions at this time" [1] but both the Google and Mozilla style guides have (conflicting) naming conventions. 
Can we clarify which, if any, naming guidelines apply to new and older code? The other piece of the equation here is [1] > our precise coding style will be the result of formatting our > tree with clang-format, so by definition our code, when > formatted with clang-format, will be following our new coding > style. clang-format doesn't provide compliance with all guidelines in the Google style, but it does have precedence, because the only option to override is `// clang-format off`. clang-format has its own preferences for formatting, many of which are not documented in the Google style guide. Would it be appropriate for the Mozilla coding style page to indicate that `./mach clang-format` defines the formatting of C++ code and the Google style guide provides other guidelines, which should be followed in new code unless there is an exception listed? https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code is another page with more guidelines. I assume we would expect these to be consistent with those on the Mozilla coding style page. I assume "Using C++ in Mozilla code" would take precedence over Google style? [1] https://groups.google.com/forum/#!msg/mozilla.dev.platform/VCLB1Lz4yXc/dNoNVzmlCQAJ [2] https://google.github.io/styleguide/cppguide.html [3] https://groups.google.com/forum/#!topic/mozilla.dev.platform/0NFBXe6VnfM
Re: C++ method definition comments
Ehsan Akhgari writes: > What tool do you use which has difficulty showing function names in diffs > right now? It seems to work fine for me both in git and hgweb... It's cases like these that are truncated earlier due to putting the return type before the function name:

% hg export 9f7b93f5c4f8 | sed -n 275p
@@ -891,19 +886,17 @@ nsresult nsMixedContentBlocker::ShouldLo

% hg export b8d2dfdfc324 | sed -n 2094p
@@ -1717,16 +1860,23 @@ already_AddRefed nsFactoryEn

https://hg.mozilla.org/mozilla-central/rev/9f7b93f5c4f8#l5.119
https://hg.mozilla.org/mozilla-central/rev/b8d2dfdfc324#l7.838
Re: C++ method definition comments
Ehsan Akhgari writes:
> On Mon, Jan 28, 2019 at 2:58 PM Jeff Gilbert wrote:
>
>> I would much rather revert to:
>> /*static*/ void
>> Foo::Bar()
>>
>> The Foo::Bar is the most relevant part of that whole expression, which
>> makes it nice to keep up against the start of the line.
>
> The clang-format option which allows formatting the way you are suggesting,
> AlwaysBreakAfterDefinitionReturnType, is deprecated, and is likely to be
> removed from a future version of clang-format, so there is no sustainable
> way for us to adopt this suggestion.

Where there's a will there's often a way. e.g.

/*static*/ void  //(clang-format line-break)
Foo::Bar() {

I do like being able to see function names in diff output, but I'm not so keen on having to put //cflb at the beginning of every function. This feels too much like working against the decision to follow Google style. With so much code using Google style, I guess there must be tools to show useful information in diff output, or at least there will be soon...
Re: C++ method definition comments
Ehsan Akhgari writes:
> On Mon, Jan 28, 2019 at 6:27 PM Ryan Hunt wrote:
>
>> [...]
>>
>> So for converting from C-style to C++-style, that would be:
>>
>> /* static */ void Foo::Bar() {
>> ...
>> }
>>
>> // static
>> void Foo::Bar() {
>> ...
>> }
>>
>> [...]
>>
>> My one concern would be the presence of other C++-style
>> comments before the method definition. For example [1].
>
> [...] How about detecting those cases and inserting a newline
> between the comments on the line before, for extra clarity?
>
>> [...]
>>
>> [1]
>> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1023
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Monday, January 28, 2019 12:51 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
>>
>> [...]
>>
>> The path to least resistance for addressing this problem may be to convert
>> these into C++-style comments and therefore moving them into their own
>> lines. Would you be OK with that?

I haven't noticed clang-format enforcing its own opinions on comments when they already follow Google style. In my experiments clang-format is accepting this:

// Make this Foo Bar.
/* static */ void Foo::Bar() {
  ...
}

The /* */ style comment provides a clear separation from any other comment on the previous line, without the need for an extra blank line. "don't use blank lines when you don't have to."
Re: Plan for Sunsetting MozReview
Martin Thomson writes: > On Wed, Sep 5, 2018 at 4:42 PM Mark Banner wrote: >> A couple of things that may help with the scrolling & finding, that >> people may or may not have found yet... > > The keyboard shortcuts are more accessible (type ? to see the list > [1]), though in my experience they interact poorly with concurrent > mouse actions. Use one or the other exclusively for best results. Interesting. n and p will include "Done" inline comments, but the x / y comments button skips "Done" inlines. But please don't depend on either of these to find all issues that need to be addressed. Neither will include inline comments made on previous diffs that have not been ported to ghost comments on the latest diff. I wonder what the "x" is in x / y. (I was guessing it would show the current inline number, but it doesn't.)
Re: Plan for Sunsetting MozReview
Mark Côté writes: > On August 20, we will remove public access to MozReview and archive > patches. Every landed, in-progress, and abandoned patch will be downloaded > from MozReview and stored in an S3 bucket. The “stub attachments” in > Bugzilla that currently redirect to MozReview will be updated to link to > the appropriate S3 bucket. Review flags and bug comments will be preserved. > We will also be writing a simple redirection service to map specific > MozReview reviews to associated BMO comments, review requests to associated > bugs, and review-request diffs to the appropriate S3 buckets. Could the stub attachments and review-request diffs (but not interdiffs of course) redirect to the commits in the review repo (at https://reviewboard-hg.mozilla.org/gecko/ or wherever this ends up), please? The parent revision is important to the meaning of a patch, and it is not clear that navigation from the S3 buckets to the mercurial revisions in the review repo will be easy.
Re: Intent to implement: Scrollbar color properties
Xidorn Quan writes: >> Would the color for the auto property be derived in such a way as >> to hide the thumb or to make it contrast? > > No, currently we don't derive them from each other. We simply use some > fallback color in that case. How is the fallback color chosen? > It is not impossible to derive them, but I would expect authors to specify > them together. We know from experience that authors will not in general specify complete pairs of colors intended to contrast. We need to ensure that if only one color is specified, then the scrollbar is either consistently usable or consistently not. We shouldn't add a feature that can lead to pages testing as reasonable on some systems, but rendering unusable on others.
Re: Intent to implement: Scrollbar color properties
Thanks very much for answering some of my questions for me. Xidorn Quan writes: > On Wed, Jul 25, 2018, at 6:29 PM, Karl Tomlinson wrote: >> Is there a plan to avoid the contrast problems we have mixing >> document colors with system colors in other widgets? >> >> e.g. If one scrollbar color is specified by the document, then >> what ensures that other parts of the scrollbar are visually >> distinct? >> >> Does the computed value of the other scrollbar color change so >> that it contrasts with the specified color? > > There are only two scrollbar color properties. If a platform has > other parts which need different colors, we would derive such > colors from the given one based on the native scheme and > contrast. Does the second sentence also apply when only one of the two properties is auto? Would the color for the auto property be derived in such a way as to hide the thumb or to make it contrast? >> I see some such code in GetScrollbarArrowColor(), but I haven't >> found something similar for track and thumb. Does something >> ensure the thumb will be visible? > > Both track and thumb are specified by the author, so no, there > is nothing that ensures the thumb will be visible if the author > specifically wants to hide it. I can understand that if the computed values of track and thumb match and are not auto, then a scrollbar might be hidden. I'm not clear on whether it will actually be hidden, because I'm not clear on how much variation (e.g. shading) the implementation is permitted to apply to shape the colors. But my key questions are about the situation where one and only one of the two colors is auto. I'm not entirely clear what "If scrollbar-track-color computes to auto, and scrollbar-face-color is not auto, it is used to color the scrollbar track." means. I guess it means that, if only face is not auto, then the track color is the same as that of face. Is that your understanding?
Would it be correct to say that the "used value" for the track property is determined from the computed value of the face property in that situation? Assuming the face and track colors match after applying this rule, would the scrollbar then be hidden, in the same way as it would when face and track colors match but neither computed value is auto? What if only track is not auto? >> What if the user has a high contrast theme for accessibility >> reasons? Does this override document colors? > > It follows the same color overriding settings: if > browser.display.document_color_use is the default value, and the > user is using a high contrast theme, those properties would be > ignored during style cascading, and consequently no custom > scrollbar would be used. Sounds good, thanks.
Re: Intent to implement: Scrollbar color properties
Is there a plan to avoid the contrast problems we have mixing document colors with system colors in other widgets? e.g. If one scrollbar color is specified by the document, then what ensures that other parts of the scrollbar are visually distinct? Does the computed value of the other scrollbar color change so that it contrasts with the specified color? I see some such code in GetScrollbarArrowColor(), but I haven't found something similar for track and thumb. Does something ensure the thumb will be visible? What if the user has a high contrast theme for accessibility reasons? Does this override document colors?
Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)
Is there a guideline that should be used to evaluate what can acceptably run in the same process for different sites? I assume the primary goal is to prevent one site from reading information that should only be available to another site? There would also be defense-in-depth value from having each site sandboxed separately because a security breach from one site could not compromise another. I guess a single compositor process is acceptable because there is essentially no information returning from the compositor? A font server may be acceptable, because information returned is of limited power? Use of system font, graphics, or audio servers is in a similar bucket I guess. Would using a single process for network be acceptable, not because information returned is limited, but because we're willing to have some compromise because there is a small API surface? Or would that be acceptable because content JS does not run in that process? Would it be acceptable to perform layout in a single process for multiple sites (if that were practical)? Would it be easier to answer the opposite question? What should not run in a shared process? JS is a given. Others?
Re: Coding style: brace initialization syntax
On Fri, 13 Apr 2018 10:22:06 -0400, Boris Zbarsky wrote:
> On 4/13/18 9:37 AM, Emilio Cobos Álvarez wrote:
>> Would people agree to use:
>>
>> , mIsRootDefined { false }
>>
>> Instead of:
>>
>> , mIsRootDefined{ false }
>
> So my take is that we should not use braced initializer syntax in
> constructor initializer lists. The reason for that is that it
> makes it much harder to scan for where the constructor body
> starts. Doubly so when ifdefs in the initializer list are
> involved. Triply so when someone writes it as:
>
> explicit TTextAttr()
> , mIsRootDefined
> {
> false
> }
> #ifdef SOMETHING
> #endif
> {
> }
>
> which is what clang-format did in some of the cases in the patch
> for bug 525063.
>
> In particular, I think this should have just used:
>
> , mIsRootDefined(false)

One advantage of list-initialization over direct initialization is that narrowing conversions are prohibited in list-initialization. This is particularly useful in member initializer lists, where the types are not readily visible (which is another good reason for default member initialization). I guess that doesn't matter for bool initializers, but consistency ... Below, D will compile, but L will not.

struct D {
  int i;
  D() : i(3.14) {}
};

struct L {
  int i;
  L() : i{ 3.14 } {}
};

One advantage of lack of space between identifier and brace-init-list is that it is visibly different from the space before the constructor body.
Re: Removing tinderbox-builds from archive.mozilla.org
On Fri, 18 May 2018 13:13:04 -0400, Chris AtLee wrote: > IMO, it's not reasonable to keep CI builds around forever, so the question > is then how long to keep them? 1 year doesn't quite cover a full ESR cycle, > would 18 months be sufficient for most cases? > > Alternatively, we could investigate having different expiration policies > for different type of artifacts. My assumption is that the Firefox binaries > for the opt builds are the most useful over the long term, and that other > build configurations and artifacts are less useful. How accurate is that > assumption? Having a subset of builds around for longer would be more useful to me than having all builds available for a shorter period. The nightly builds often include large numbers of changesets, sometimes collected over several days, and so it becomes hard to identify which code change modified a particular behavior. I always use opt builds for regression testing, and so your assumption is consistent with my experience. I assume there are more pgo builds than nightly builds, but fewer than all opt builds. If so, then having a long expiration policy on pgo builds could be a helpful way to reduce storage costs but maintain the most valuable builds.
Re: Intent to ship: media-capabilities
Steven Englehardt writes: > While it may have been > theoretically possible for all trackers to gather statistics on video > playback for each configuration, the only scripts that could practically > carry out those attacks without degrading user experience would have been > video providers. This will be especially true if browsers start blocking > autoplay by default (https://bugzilla.mozilla.org/show_bug.cgi?id=1376321), > since users will never interact with media elements from fingerprinting > scripts. [...] > If autoplay is eventually blocked by default could we gate the response of > this API on user interaction with the media element? Note that current playback blocking work is focused on media elements that produce audio. Video-only elements would not be affected. The blocking is also gated by interaction with the document, rather than any particular element. (There may be other non-default behaviors available.)
Re: Should web specifications try to describe object lifetimes? [was Intent to implement: AudioWorklet]
On Fri, 4 May 2018 14:32:20 -0400, Boris Zbarsky wrote: > On 5/4/18 3:34 AM, Karl Tomlinson wrote: >> Not sure I understand your question, but the observable behavior >> described by this section, or specifically "Before an AudioNode is >> deleted, it will disconnect itself from any other AudioNodes" > > Right. So that observably influences the sound produced or some > such, sounds like? That's correct. The connections influence the behavior of the "other AudioNodes". Depending on how the rest of the graph is assembled, the behavior change may or may not be heard. There are some graph designs where the behavior change will be heard.
Re: Should web specifications try to describe object lifetimes? [was Intent to implement: AudioWorklet]
Boris Zbarsky writes: > So if there is in fact a problem still remaining, convincing Alex > that it's there is pretty valuable. Having him convinced it > doesn't exist is actively harmful as far as getting it fixed goes. Thank you. Alex sounds like a very valuable person to get on side. We may still have some time on our hands, but I'm not sure how long to pursue the WG about this before we aim to get Alex involved. Neither Chrome 66 nor current Chromium [[processor source code]] supports AudioWorkletNodes with lifetimes based on input connections in a way that is compatible with the spec, and so Chrome or the spec will change at some point. It would be nice, however, to resolve these issues before Chrome ships such changes. Hongchan has assigned himself to the [[AudioNode Lifetime]] spec bug, and so I'm happy for him to have a go, if he is planning to fix it. > If there is no difference in observable behavior, what would a > normative requirement mean? Good question. I will find that useful at some point. > Karl Tomlinson wrote: >> If the whole normative AudioNode lifetime section were dropped >> then this would clearly be an implementation issue rather than a >> spec issue. > > Assuming there is no observable behavior involved (other than > memory usage), right? Not sure I understand your question, but the observable behavior described by this section, or specifically "Before an AudioNode is deleted, it will disconnect itself from any other AudioNodes", is the source of observable behavior on GC. Removing this section would remove the core problem. The rest of the section describes conditions that would keep an AudioNode alive, and describes when AudioNodes are no longer required. These portions are not _necessary_, as nothing elsewhere indicates that AudioNodes should ever be removed, but there seem to be differing opinions on the importance of these clauses. (Compare e.g.
[[PermissionStatus listeners]], [[WebSocket listeners]]) >> The history here is: > > Thank you for the summary. That was helpful. One thing I'm not > 100% clear on at this point: are there spec issues that can lead > to observable GC, or are we just talking about implementation bugs > that make GC observable now? Yes, the source of the observable GC problem is very much in the spec now, and tracked by the [[AudioNode Lifetime]] issue. Once that is resolved, I'm hoping that a spec change to support something like [[silence tracking]] can make AudioWorklet suitable for custom AudioNodes. [[processor source code]] https://chromium.googlesource.com/chromium/src/+blame/9d4a34cadf1b7d85f4ad0b709374aaaf59325a8d/third_party/WebKit/Source/modules/webaudio/AudioWorkletNode.cpp#110 [[PermissionStatus listeners]] https://github.com/w3c/permissions/issues/170 [[WebSocket listeners]] https://html.spec.whatwg.org/#garbage-collection-3 [[AudioNode Lifetime]] https://github.com/WebAudio/web-audio-api/issues/1471 [[silence tracking]] https://github.com/WebAudio/web-audio-api/issues/1453 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Should web specifications try to describe object lifetimes? [was Intent to implement: AudioWorklet]
Boris Zbarsky writes: > On 5/2/18 5:21 AM, Karl Tomlinson wrote: >> [[AudioNode Lifetime]] https://github.com/WebAudio/web-audio-api/issues/1471 > > I've read through that thread, but I'm still a little unclear on > where things stand. With the latest proposal, can there be > observable situations where the output sound depends on GC timing? Thank you for taking a look, Boris. I'm quite unclear how any of the changes proposed in the [[March F2F resolution]] comment would resolve the issue, and so I expect not. [[March F2F resolution]] https://github.com/WebAudio/web-audio-api/issues/1471#issuecomment-376223916 > If so, that doesn't sound acceptable. It also sounds to me like > Alex Russell in > https://github.com/WebAudio/web-audio-api/issues/1471#issuecomment-362604396 > didn't believe there was an observable sound difference possible. > If one is possible, we should communicate this to him very > clearly The spec says that when a node is deleted, it will disconnect itself from other nodes. This disconnection definitely can lead to observable sound differences, if the node can be deleted. The spec is unclear on whether a node with observable connections can be deleted, but there is certainly reason to interpret it in this way and members of the working group were doing so. If anything, the March F2F resolution seems to reinforce this view. I was also quite unclear as to the point of Alex's comment. I didn't understand why he was highlighting "order of object tear-down", nor why he was implying that only "VERY fine-grained" knowledge was a problem. I'm not clear as to exactly what he was referring to with "@joeberkovitz's change to cause the spec to not discuss GC", but changing the spec to not discuss GC would certainly resolve the issue. My take-away was that Alex was advocating no behavior changes on GC. He does seem to have misunderstood that there is a problem to fix. But I assume the way forward here is to help the WG find a solution that works. 
What is the advantage of explaining the situation to Alex? I'm happy to query the F2F resolution, but I wonder which is the best way to resolve this. Is having web specifications try to describe object lifetimes helpful, or is it just over-prescribing? Should specifications instead just focus on observable behavior, and leave it to implementations to optimize and to reclaim resources that are no longer required? Are you aware of any guidance I could reference, if advocating this? (The WG has been very keen to make this normative.) Perhaps it would be best to just have an informative section explaining design decisions made for the sake of making resource reclamation possible, such as lack of graph introspection. Perhaps it would also be helpful to have some informative reminders to implementations re what GC must not affect. If the whole normative AudioNode lifetime section were dropped then this would clearly be an implementation issue rather than a spec issue.

The history here is:

1. Initially the #lifetime-AudioNode section was only informative, providing some hints to the implementation re situations when GC must not be performed. These were not a complete list. At the same time, implementations had bugs that made GC observable, but this was not usually noticed in general Web Audio usage. These implementation bugs were fixable.

2. An API/model was designed to support [[dynamic AudioWorkletNode lifetime]]. While not directly saying so at the time AFAIK, this API/model essentially depends on the implementation bugs to work as intended. This was stated much [[later]].

3. WG pressure in response to the [[dynamic AudioWorkletNode lifetime]] led to the #lifetime-AudioNode section being made [[normative]]. Subsequent additions were made to make it more comprehensive. This made the existing implementation bugs very much spec bugs. The bugs are now very noticeable in typical AudioWorkletNode usage. 
[[dynamic AudioWorkletNode lifetime]] https://github.com/WebAudio/web-audio-api/pull/959#issuecomment-260979231 [[later]] https://github.com/WebAudio/web-audio-api/issues/1453#issuecomment-349340594 [[normative]] https://github.com/WebAudio/web-audio-api/pull/1161#issuecomment-284499700 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Intent to implement: AudioWorklet
=Summary/benefits: "The AudioWorklet object allows developers to supply scripts (such as JavaScript or WebAssembly code) to process audio on the rendering thread, supporting custom AudioNodes." [[Concepts]] Allowing scripts to process audio on the rendering thread is important for low latency general client-side generation or processing of audio, free from problems of main thread delays. This is what game developers, for example, have wanted for some time. Other parts of the Web Audio API may have been presented in the past as solutions, but they were not a good fit in general. A MessageChannel permits, for example, decoding on a web worker thread and delivery on an audio thread, with no main thread influence. Custom AudioNodes would be important as a way for developers to extend the Web Audio API. Hopefully we may no longer even need to add more special-purpose nodes to the API surface. See, however, Usability/interoperability concerns below. Some work has already landed in Gecko, but I'm not aware of a previous explicit intent to implement. =Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1062849 =Link to standard: https://webaudio.github.io/web-audio-api/#audioworklet =Platform coverage: Desktop + Android. =Estimated or target release: When ready, which may require resolution of GC/interoperability concerns below. =Preference behind which this will be implemented: dom.audioWorklet.enabled and dom.worklet.enabled =Is this feature enabled by default in sandboxed iframes? No. =If not, is there a proposed sandbox flag to enable it? Yes, "allow-scripts". =DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1458445 =Do other browser engines implement this? Blink: since Chrome 66, Opera 53. https://www.chromestatus.com/feature/4588498229133312 Edge: bug assigned. https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/15812544/ Webkit: no indication. 
https://bugs.webkit.org/show_bug.cgi?id=182506 =web-platform-tests: https://github.com/w3c/web-platform-tests/tree/master/webaudio/the-audio-api/the-audioworklet-interface =Secure contexts: Yes. =Usability/interoperability concerns: AudioNodes are typically set up, scheduled, and forgotten. Once they have finished what they have been scheduled to do and upstream nodes have also finished, associated resources can be reclaimed, but some effects on downstream nodes remain. AudioWorkletNode, as currently specified, OTOH, is different. There is provision through an [[active source]] flag for an AudioWorkletProcessor to indicate that if there are no further inputs, then it no longer needs to perform further processing. However, the client needs to disconnect the inputs when finished. If the input nodes are forgotten (as is typical), then processing continues repeatedly and indefinitely (unless the whole AudioContext is stopped). The need for clients to keep track of whether inputs to these nodes have finished makes AudioWorkletNodes with inputs second-class nodes in practice. A solution based on silent input rather than connection count was proposed in https://github.com/WebAudio/web-audio-api/issues/1453 but this appears to have been rejected. It seems that Chrome works around this by choosing to garbage collect input nodes even when their presence is specified to require (observable) AudioWorkletProcessor.process() calls. This garbage collection is performed in a way that causes the process() calls to be halted (which stops sound production), and so the AudioWorkletProcessor can subsequently also be garbage collected if there are no rooted references, as usual. Having the behavior of AudioWorkletProcessor depend on whether or not the client maintains references to input nodes is not something I'd like to implement. 
It would be comparable to an audio element stopping playback at a time when an implementation chooses to perform garbage collection after the client has removed its last reference. It is contrary to [[TAG design principles]]. The Chrome approach seems to be based on a different understanding of [[AudioNode Lifetime]]. Because Chrome reclaims CPU and memory resources even when the client does not disconnect inputs from AudioWorkletNode, authors are likely to forget to track input nodes, in which case their applications will have performance problems in implementations with deterministic behavior. =Security/privacy concerns: The audio thread runs with reasonably precise timing, providing a clock edge. The additional surface from AudioWorklet is that script can run on the audio thread, at the precise time, and send messages to other threads, rather than merely having browser-messages from the audio thread to the main thread. This may be mitigated to some extent by slightly reduced precision from [[AudioIPC]] and perhaps by using tail dispatch for messages from the audio thread. [[Concepts]] https://webaudio.github.io/web-audio-api/#AudioWorklet-concepts [[active source]]
Re: Please do not use GetNativePath and GetNativeTarget in XP code and Windows-specific code
> On Thu, Nov 30, 2017 at 11:09:07AM +1300, Karl Tomlinson wrote: >> The native bytes may not be valid UTF-8, and so if the >> character encoding is UTF-8, then there may not be a valid >> `path` that can be encoded to produce the same `nativePath`. Kris Maglione writes: > I think you mean "is not UTF-8"? If the bytes in the native filename are not valid UTF-8, then they cannot be decoded as UTF-8. When the character encoding specified by the current locale is UTF-8, or sometimes even if not (e.g. OS.File), attempts are made to interpret the filename as UTF-8 for `path`. These attempts may fail. Perhaps similar issues may exist with other assumed encodings, but UTF-8 is the common one. If anyone were to ever want to try to display these filenames, then the reference would be https://developer.gnome.org/glib/stable/glib-Character-Set-Conversion.html#g-filename-display-name There is some indication of what this does at https://developer.gnome.org/glib/stable/glib-Character-Set-Conversion.html#g-get-filename-charsets > And for converting file paths to URIs, we should always be using > NewFileURI. It looks like that tries to use the UTF-16 path, > converted to UTF-8, but falls back to the native path if a > round-trip from and to the native charset doesn't produce the > same path. I don't know what using `path` in URIs is trying to support. Other applications use `nativePath` and so that is better for IPC such as drag'n'drop and clipboards. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Building mozilla-central with clang + icecream
Zibi Braniecki writes: > On Tuesday, November 7, 2017 at 2:54:45 AM UTC-8, pa...@paul.cx wrote: >> I'm using this setup daily (with clang trunk from some weeks ago, not >> 5.0, but it's the same really), here is my mozconfig: >> >> ``` >> export CC="icecc clang" >> export CXX="icecc clang++" >> mk_add_options MOZ_MAKE_FLAGS="-j100" # adjust, this is good for the >> paris office >> mk_add_options 'export RUSTC_WRAPPER=sccache' >> mk_add_options 'export CARGO_INCREMENTAL=1' >> ``` >> >> Cheers, >> Paul. > > Yeah, tried the same thing and got: > 0:12.70 DEBUG: Executing: `/usr/bin/ccache icecc /usr/bin/clang -std=gnu99 -c > /tmp/conftest.E5oBtT.c` > 0:12.70 DEBUG: The command returned non-zero exit status 127. > 0:12.70 DEBUG: Its error output was: > 0:12.70 DEBUG: | usr/bin/clang: error while loading shared libraries: > libLLVM-5.0.so: cannot open shared object file: No such file or directory > 0:12.70 DEBUG: | ICECC[8354] 18:03:45: Compiled on 10.251.25.45 Don't know whether this would result in "No such file or directory", or whether the clang and llvm could have been compiled for separate systems, but clang and llvm need to both be compiled with the same C++ ABI, either with C++11 compatibility or not. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: How can I get SOCKS settings params of Firefox?
I don't know how well GConf is supported by more recent GNOME versions. I assume GSettings support was added to nsUnixSystemProxySettings because GConf was to be no longer supported, but the crash reporter uses separate code. https://bugzilla.mozilla.org/show_bug.cgi?id=1388897 http://searchfox.org/mozilla-central/rev/8a6a6bef7c54425970aa4fb039cc6463a19c0b7f/toolkit/system/unixproxy/nsUnixSystemProxySettings.cpp#445 A potentially better solution is proposed in https://bugzilla.mozilla.org/show_bug.cgi?id=1333125 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Bulk Closing Intermittents
I assume this was integrated with OrangeFactor? That is the only way I know to determine whether an intermittent failure has occurred, because failures are not necessarily reported to bugzilla. Is there a mechanism for tracking a failure that we intend to address, even when it does not fail every 21 days? Would that be filing another bug without the intermittent-failure keyword? Emma Humphries writes: > This was the first time we bulk closed these bugs, and there will be some > glitches. I don't consider this to be a blocker on continuing this work. > > Next time we do this, it won't be 5,000+ bugs. OrangeBot runs on Sundays, > so we can do the cleanup on Monday. > > The long term goal is to stop using Bugzilla to record every intermittent > test failure and only use it for test failures we intend to address. > > -- Emma > > On Mon, Jul 10, 2017 at 5:29 AM, Kartikaya Gupta wrote: > >> It might be a good idea to integrate this process with the >> OrangeFactor Robot, to avoid race conditions like what happened on bug >> 1328486 (it was bulk-closed, and then a couple of hours later the OF >> robot reported that there were two failures this week - but the bug >> remained closed). >> >> Cheers, >> kats >> >> On Fri, Jul 7, 2017 at 10:35 PM, Emma Humphries wrote: >> > As discussed earlier, Joel and I have kicked off a process to close >> > intermittent test failures in Bugzilla. >> > >> > If a test associated with a bug does not fail in 21 days, we'll close the >> > bug as RESOLVED:INCOMPLETE. >> > >> > The first batch of intermittent bugs to close has 5,130 tickets. I have a >> > script to close these, but to close these without bug spam requires DBA >> > intervention. >> > >> > I'd like to run the closures over the weekend but that's going to create a >> > non-trivial amount of bug spam for some of you. >> > >> > There is a way to get rid of the bug spam! >> > >> > Every bug we close will have the keyword `bulk-close-intermittents` added >> > to it. 
>> > >> > If you search for messages from `bugzilla-dae...@mozilla.org` containing >> > `bulk-close-intermittents` you can isolate, review, and remove those >> > messages. >> > >> > Thank you for your patience while we all work to get the noise out of >> > Bugzilla so we can find the strong signals on what we must focus on to >> > deliver Firefox 57 in November. >> > >> > -- Emma >> > ___ >> > dev-platform mailing list >> > dev-platform@lists.mozilla.org >> > https://lists.mozilla.org/listinfo/dev-platform >> ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Is it OK to make allocations that intentionally aren't freed?
Andrew McCreight writes: > On Fri, May 19, 2017 at 7:38 PM, Nicholas Nethercote wrote: > >> There's also a pre-processor constant that we define in Valgrind/ASAN/etc. >> builds that you can check in order to free more stuff than you otherwise >> would. But I can't for the life of me remember what it's called :( >> > > NS_FREE_PERMANENT_DATA. I understand the need to clean up objects for NS_BUILD_REFCNT_LOGGING, but how important is this for objects not tracked with NS_BUILD_REFCNT_LOGGING, but only MOZ_VALGRIND and/or MOZ_ASAN? I had seen enough unfreed allocations reachable from a root in static storage that I assumed valgrind and ASAN ignored such allocations in their leak reports. Is the inclusion of MOZ_VALGRIND and MOZ_ASAN in the value of NS_FREE_PERMANENT_DATA merely to make leak detection easier for valgrind and ASAN, or will such allocations be reported as leaks by these tools? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Improving visibility of compiler warnings
On Sat, 20 May 2017 14:59:11 -0400, Eric Rescorla wrote: > On Sat, May 20, 2017 at 1:16 PM, Kris Maglione wrote: > >> On Sat, May 20, 2017 at 08:36:13PM +1000, Martin Thomson wrote: >> >>> Hmm, these are all -Wsign-compare issues bar one, which is fixed >>> upstream. We have an open bug tracking the warnings >>> (https://bugzilla.mozilla.org/show_bug.cgi?id=1307958 and specifically >>> https://bugzilla.mozilla.org/show_bug.cgi?id=1212199 for NSS). >>> >>> NSS is built with -Werror separately, but disables errors on >>> -Wsign-compare. Disabling those warnings for a Firefox build of NSS >>> wouldn't be so bad now that we share gyp config. Based on a recent >>> build, that's 139 messages (add 36 if you want to add nspr). >>> >> >> Is there a particularly good reason that NSS needs to disable >> -Wsign-compare? That seems like a footgun waiting to happen, especially in >> crypto code. > > > Like many other pieces of old code, there are a lot of things that trigger > compiler warnings which we have been progressively removing, but we > haven't removed these yet. Ideally we would have some tooling that > would distinguish new warnings from old warnings, but absent that, > until we fix all the existing warnings it's either disable -Werror or > disable this particular warning. It's not something we're particularly > happy about. -Wno-error=sign-compare is also an option, but a wall of sign-compare warnings may not be helpful in general. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Using references vs. pointers in C++ code
Nathan Froyd writes: > I think a broader definition of "POD struct" would be required here: > RefPtr and similar are technically not POD, but I *think* you'd > want to require RefPtr* arguments when you expect the smart pointer > to be assigned into? Not sure. Yes, please, for similar reasons as for the primitive types. > On Tue, May 9, 2017 at 10:39 AM, Boris Zbarsky wrote: >> But for object-typed things like dom::Element or nsIFrame, it seems better >> to me to pass references instead of pointers (i.e "Element&" vs "Element*") >> for what are fundamentally in params, even though the callee may call >> mutators on the passed-in object. That is, the internal state of the >> element may change, but its identity will not. Nathan Froyd writes: > I get this argument: you want the non-nullability of references, but > you don't want the verboseness (or whatever) of NonNull or similar, > so you're using T& as a better T*. I think I would feel a little > better about this rule if we permitted it only for types that deleted > assignment operators. Not sure if that's really practical to enforce.

With convention to use references to objects for methods that change the state of the object, the advantages are:

R1. Nullability is indicated differently from NotNull.
R2. Parameters which may change the identity of an object can be distinguished at the call site from those that may change only state.

With convention to use pointers to objects for methods that change the state of the object, the advantages are:

P1. Parameters which change the state of an object can be distinguished at the call site from those that don't.
P2. Consistency with primitive types.

My experience is that for understanding while reading code, it is more helpful to distinguish where state may be modified than to distinguish what may be null. One particular case is where a method transfers state from one object to another. At the call site it is useful to distinguish source from destination. 
For object types, change in identity is rare (non-existent for ref-counted objects) and so there is rarely a need to distinguish this from state change. Typically identity changes only when the pointer changes. Doesn't NotNull provide that pointers can have the same advantages as references re visibility of nullability where desired, so we can have the best of both? Or is NotNull really too awkward IRL? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
firefox-dev posting [was Re: unowned module: Firefox::New Tab Page, help me find an owner]
Justin and I did some investigation and weren't able to reproduce the problems people were encountering last year. We confirmed that it is not necessary to subscribe before posting. If the sender is not whitelisted, then messages receive an immediate reply with the subject "Your message to firefox-dev awaits moderator approval". Messages caught by the spam filter do not receive this reply. If a message does not get posted on the list and there is no moderation auto-reply, then it has gone missing, perhaps filtered with the spam. firefox-dev-owner is the official contact if there is a problem. On Thu, 23 Mar 2017 00:55:19 -0700, Justin Dolske wrote: > Can you follow up with me in separate email about what posts you're > concerned about? > > I've done most of the rejection moderation on the list, and usually reply > to people explaining why and where a better venue would be. Most stuff that > has been moderated is exactly what's listed there as being off-topic, and > that's fairly uncommon. Obvious spam gets silently plonked, but that's > presumably not what you're talking about. > > This thread, had it been posted to firefox-dev, would very clearly be > on-topic and not rejected. > > Justin > > On Wed, Mar 22, 2017 at 7:35 PM, Karl Tomlinson <mozn...@karlt.net> wrote: > >> Benjamin Smedberg writes: >> >> > This is not the list for this question. Please respect this question to the >> > firefox-dev list. >> >> https://wiki.mozilla.org/Firefox/firefox-dev says "Anyone can post. By >> default, posts will be reviewed by a moderator before being sent to the >> list." >> bit in fact some unclear set of posts are redirected to /dev/null, even >> when >> contributing to the goal and abiding by the rules. >> ___ >> dev-platform mailing list >> dev-platform@lists.mozilla.org >> https://lists.mozilla.org/listinfo/dev-platform >> ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: unowned module: Firefox::New Tab Page, help me find an owner
Benjamin Smedberg writes: > This is not the list for this question. Please respect this question to the > firefox-dev list. https://wiki.mozilla.org/Firefox/firefox-dev says "Anyone can post. By default, posts will be reviewed by a moderator before being sent to the list." but in fact some unclear set of posts are redirected to /dev/null, even when contributing to the goal and abiding by the rules. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Project Stockwell (reducing intermittents) - March 2017 update
> It'd make me feel slightly less sad that we're disabling tests > that do their job 90% of the time... The way I interpret a test failing 10% of the time is that either it has already done its job to indicate a problem in the product, or the test is not doing its job. Either way, if it is not going to be actively addressed, then the value in continuing to run the test is questionable. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Is there a way to improve partial compilation times?
zbranie...@mozilla.com writes: > * I still have only 8GB of ram which is probably the ultimate > limiting factor You are right here. RAM is required not only for link time, but also when compiling several large unified files at a time (though perhaps this is not so significant with only 4 cores), and for caching files between phases. I notice build time and system responsiveness affected each time a 4GB memory card fails leaving that machine with only 12GB of RAM. There is often a web browser running on the same machine though. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Project Stockwell (reducing intermittents) - March 2017 update
I would like to see failure rates expressed as a ratio of failures to test runs, but I recognise that this data may not be readily available and getting it may not be that important if we have a rough idea. These are a means for setting priorities, and so a rank works well. If we have 100 tests, each with an expected failure rate of 3%, then the chance of a clear run is less than 5%. 3% is not an acceptable failure rate for a single test IMO. It does get harder to reproduce as the failure rate reduces, but logging can be added and inspected on the next failure, which will not be far away. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Please don't abuse "No bug" in commit messages
Gijs Kruitbosch writes: >> What if it causes a regression and a blocking bug needs to be filed? > Then you file a bug and needinfo the person who landed the commit > (which one would generally do anyway, besides just marking it > blocking the regressor). I find the association of multiple regressions with their cause very helpful because one regression is often similar to another. Bugzilla provides the link from cause to effect, a link direction which is not in the changeset history. Where there is no bug for the cause, then these links are missing. I'm not convinced that a "Changeset X bug" filed after the fact would be easy to find from the commit message. > More generally, I concur with Greg that we should pivot to having > the repos be our source of truth about what csets are present on > which branches. I've seen cases recently where e.g. we land a > feature, then there are regressions, and some of them are > addressed in followup bugs, and then eventually we back everything > out of one of the trains because we can't fix *all* the > regressions in time. At that point, the original feature bug's > flags are updated ('disabled' on the branch with backouts), but > not those of the regression fix deps, and so if *those* have > regressions, people filing bugs make incorrect assumptions about > what versions are affected. Manually tracking branch fix state in > bugzilla alone is a losing battle. A system for tracking related changes and branch state that can have mistakes corrected is better than no system at all. (I'm not clear whether you think the information currently in the repos is sufficient.) For example, using changeset history would be a cumbersome way to determine whether the most recent change for one issue on a branch was a "fix" or a back-out. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: autoland cleanup
Gregory Szorc writes: > On Wed, Nov 30, 2016 at 12:40 PM, Karl Tomlinson <mozn...@karlt.net> wrote: >> When history is rewritten, is there a way to view the original >> history through the web interface, so that autoland tinderbox >> builds can be used to find regression ranges? > > No. Rewritten changesets should result in "replacement" automation results. > So there should be no loss of automation coverage. So there should be no > major impact to things like regression range searching. Are the "replacement" results from tests re-run with builds from the new revisions? Or are the results copied from results for the previous revisions? If so, is this detectable in treeherder somehow? >> https://hg.mozilla.org/integration/autoland/rev/04a37d89a469 >> says "An error occurred while processing your request" > > This is likely bug 1321344. Ugh, it looks like that issue is more > widespread than I thought. I'll try to fix it in the next hour. I thought the "no" answer above would indicate changesets that have been rewritten would no longer be visible through the URLs such as above, but would you expect rewritten changesets to remain visible on these URLs (if bug 1321344 were fixed)? >> Is it necessary to pull autoland and use the evolve extension? > > It isn't necessary to have evolve enabled to pull the autoland repo. But if > evolve is enabled, rewritten changesets that you have pulled from the > autoland repo will disappear automatically. Otherwise your local repo will > keep accumulating old heads now have since been rewritten on the autoland > repo. Thanks. Just to be clear, will pulling from autoland pull old heads that have been rewritten? (i.e. I'm not clear whether the accumulating old heads need to be pulled before the rewrite.) ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: autoland cleanup
Gregory Szorc writes: > When the autoland repository was introduced, it was advised to not pull > from this repository because we plan to do rewrites like this frequently in > the future. So if this rewriting impacted your local repo and you aren't a > sheriff, you should consider changing your workflow to not rely on pulling > the autoland repo. Thank you for cleaning up the accidental delete/adds. Do you think it would be worth blocking commits from git that add and remove files to avoid this happening in future? This is not the first time this has happened. Bug 1313432 (automv) may help on autoland, but the problem is not specific to autoland. When history is rewritten, is there a way to view the original history through the web interface, so that autoland tinderbox builds can be used to find regression ranges? https://hg.mozilla.org/integration/autoland/rev/04a37d89a469 says "An error occurred while processing your request" Is it necessary to pull autoland and use the evolve extension? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to restrict to secure contexts: navigator.geolocation
Aryeh Gregor writes: > On Tue, Oct 25, 2016 at 8:12 PM, Anne van Kesteren wrote: >> The basic problem is prompting the user at all for non-HTTPS since >> that leads them to think they can make an informed decision whereas >> that's very much unclear. So prompting more would just make the >> problem worse. >> >> We want to get to a place where when we prompt the user on behalf of a >> website we have some certainty who is asking the question (i.e., >> HTTPS). > > By that logic, we should not permit users to submit forms to non-HTTPS > either. I guess that would be ideal, but there are some situations where it doesn't matter if the world sees the form data. Similarly there may be some situations where the user is happy for the world to know their location. The UA just needs to make it clear who can see this information and for how long. This is, however, assuming the user will make a reasonable decision based on that info. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Keyboard scan codes on Linux
On Fri, 19 Aug 2016 17:05:30 -0400, Eric Shepherd wrote: > I'm trying to update the table of scan codes and the keys they go with here: > > https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/code#Code_values_on_Linux_(X11)_(When_scancode_is_available) > > But the values in that table for the hardware keycodes seem to have no > relation whatsoever to any of the published tables of hardware key > values I can find anywhere. Is that table completely wrong, or for a > version of Linux that no longer exists? :) I suspect the CODE_MAP_X11 native values derive from https://cgit.freedesktop.org/xkeyboard-config/tree/keycodes/evdev I don't know, however, whether there is much value in publishing these values. I guess someone could run xev to get the values, but JavaScript that displays the .code value would provide a more direct answer. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Making try faster for debugging intermittents
William Lachance writes: > As part of a larger effort to improve the experience around > debugging intermittents, I've been looking at reducing the time it > takes for common "try" workloads for developers (so that > e.g. retriggering a job to reproduce a failure can happen faster). > Also, accounts of specific try workloads of this type which are > annoying/painful would be helpful. :) I think I have a rough idea > of the particular type of try push I'm looking for (not pushed by > release operations, at least one retrigger) but it would be great > to get firsthand confirmation of that. One thing that might be helpful is enabling running only tests on try with a designated build that has already been created. Often tests are modified to add logging, after which the same build could be run with the new version of the test, thus saving waiting for a build. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Tier-1 for Linux 64 Debug builds in TaskCluster on March 14 (and more!)
> Could the lack of failure emails be specific to taskcluster jobs? https://bugzilla.mozilla.org/show_bug.cgi?id=1275774 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Tier-1 for Linux 64 Debug builds in TaskCluster on March 14 (and more!)
Could the lack of failure emails be specific to taskcluster jobs? https://treeherder.mozilla.org/#/jobs?repo=try=a8c6ab15dd8f ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Updating 32-bit Windows users to 64-bit Windows builds?
Lawrence Mandel writes: > Do we need this criteria? > > RAM - Does it hurt to move an instance that has <4GB? Yes. OOM will be more common with 64-bit builds on systems with less RAM because 64-bit builds use more memory. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Can we remove nsIEntityConverter?
Cross-posting to mozilla.dev.tech.mathml so that this is seen by people who are interested. Please follow-up to mozilla.dev.platform. Henri Sivonen writes: > We ship data tables for converting from Unicode to HTML entities. > These tables obviously take space. (They are not optimized for space > usage, either.) As far as I can tell, these tables are not used at all > in Fennec. In desktop Firefox, these data tables are used only for the > MathML View Source feature. > > Additionally, a subset of the tables is used by some XPCOM-based > extensions, but those extensions seem to be obsolete or abandoned or > don't seem to be using the feature for a very good reason. > > These data tables are not exposed to the Web Platform. > > In https://bugzilla.mozilla.org/show_bug.cgi?id=1048191 I proposed > removing this for mobile only, but how about we just remove this > altogether in order to make both Fennec and desktop Firefox smaller? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to implement and ship: IIRFilterNode
Thanks for the replies, Dan and Roy. A first order filter node with AudioParam inputs seems a likely future addition AFAIK. Even with that though, having a way to apply a custom biquad without needing to decompose into multiple textbook filters is useful I think. And I agree that implementing this with AudioParams seems unnecessary and risky wrt stability. So I think the IIRFilterNode will still be useful as a basic building block for authors (and that is the kind of node that Web Audio should be providing). The discussion re extending BiquadFilterNode for this has passed but there is an elegance in the generality of IIRFilterNode. Daniel Minor writes: > In this case, my plan > was to import the blink IIRFilter as utility code as we've done in the > past, so I don't think supporting the IIRFilterNode will cost us much time > or effort and keeps us compatible with the spec. I wanted to think this through now because, once this is shipped, we can't take it back, so the cost is ongoing. When Gecko has used Blink or Webkit's implementation in the past, the WG has sometimes used this to argue that quirks in multiple existing implementations should be maintained even if not desirable. Sometimes we just live with the quirks. Other times the nodes have been deprecated and replaced, but we still need to continue to support the old nodes for backward compatibility. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Static analysis for "use-after-move"?
Xidorn Quan writes: > I think this specific case should actually use UniquePtr& rather > than && in parameter for conditional move, so that callsite can only pass > in an lvalue, and we don't need a Move there. Jim Blandy writes: > TakeMediaIfKnown is accepting a > UniquePtr as an inout parameter: it uses, and may modify, its > value. It should take UniquePtr &. > > UniquePtr.h disagrees with me: > > * ... To conditionally transfer > * ownership of a resource into a method, should the method want it, use a > * |UniquePtr&&| argument. UniquePtr* would make it clear at the call site that something out of the ordinary can happen. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to implement and ship: IIRFilterNode
Daniel Minor writes: > Summary: This provides an alternative to using BiquadFilterNode when > odd-order filters are required or automation is not needed. It is part of > the Web Audio spec and is already implemented in Blink. Thanks for looking at this, Daniel. I fear that high order filters are going to have problems due to numerical round-off, as pointed out in https://github.com/WebAudio/web-audio-api/issues/323#issuecomment-60545047 Do you know whether that is going to be a general problem or whether many high order filters will be OK in practice? There is some discussion here but my understanding is limited: http://signal.ece.utexas.edu/~arslan/courses/dsp/lecture25.ppt There was some tinkering of test output precision for up to 4th order filters in patch sets 7, 8, 9, and 19 of https://codereview.chromium.org/1361233004/ The Blink implementation doesn't factor into low-order filters, and I expect it would be quite some work to do this. I wonder whether it may be that the IIRFilterNode should be used only for first and second order filters and higher order filters should be composed as a series of these? IIRFilterNode is probably an OK solution for first-order and custom biquad filters, as long as we don't then need a pole/zero filter node with AudioParam parameters because someone wants automation (as already requested for and available with BiquadFilterNode). Then we'd have yet another filter node, which would be unfortunate. Do you think this is likely? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: How is libc shared object chosen in libxul?
libxul.so is dlopen()ed from the firefox process. That firefox process would already have a libc.so.6, I assume, and so I would not expect libxul.so to load another libc.so.6. That seems to be confirmed here by running strace -e trace=file firefox/firefox. See whether something different is happening on your system. Aaron Klotz writes: > This question is better suited to the dev-platform audience. > > On 4/24/2016 1:50 PM, john smith wrote: >> Hello, >> >> Today I installed musl C x86-64 library next to GLIBC on Linux 86-64 >> system. It has been installed to /usr/lib/libc.so. I didn't expect >> any problems with that, in fact all programs were running fine but >> Firefox couldn't start. The reason is (I think) that /usr/lib/libc.so >> was chosen by XUL for foreign function call instead of >> /lib64/libc.so.6. This is what gdb says: >> >> Program received signal SIGSEGV, Segmentation fault. >> 0x7fffc0f80b82 in strerror_l () from /usr/lib/libc.so >> (gdb) bt >> #0 0x7fffc0f80b82 in strerror_l () from /usr/lib/libc.so >> #1 0x731299a4 in ffi_call_unix64 () from >> /usr/lib64/firefox-45.0.2/libxul.so >> #2 0x73129050 in ffi_call () from >> /usr/lib64/firefox-45.0.2/libxul.so >> >> Segmentation fault is not strange in such situation, it's normal when >> a dynamic linker is a different version than libc. After removing >> musl XUL picked /lib64/libc.so.6 as shown here albeit for a different >> C function: >> >> (gdb) bt >> #0 0x76cb8150 in _xstat () from /lib64/libc.so.6 >> #1 0x731299a4 in ffi_call_unix64 () from >> /usr/lib64/firefox-45.0.2/libxul.so >> #2 0x73129050 in ffi_call () from >> /usr/lib64/firefox-45.0.2/libxul.so >> >> I cloned gecko-dev repository but I didn't find anything related to this >> issue. Can someone explain what happened here? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: MozReview's interdiffs
Thanks for the info, Mark. In the mean time, at least, it would make interdiffs easiest to read if patch authors can submit updates to patches against the same revision as the original patches. If that causes too much inconvenience (and it will sometimes), then separate pushes for the rebase and other changes to the patches should make it easier to follow what has happened. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Why is Mozreview hassling me about squashed commits?
Eric Rescorla writes: > I don't believe I am asking for this, just auto-squash on submit. I > certainly understand if it's your position that you have higher priorities, > that's fine, but it's not fine to remove the ability to do squashed reviews > before something like that lands. Perhaps the distinction between the processes here is that it is typical in Mozilla process to attach several logical changes to one bug report if they are all required to fix the single bug. Filing dozens of bugs can become unnecessary overhead if changesets are going to land together anyway, but there can be advantages to multiple bug reports, when backouts or uplifts are required, for example. I assume none of us want different logical changes in one changeset. This makes reviewing and regression tracking harder. [1] auto-squash on submit of course would not be fine for different logical changes on one bug report, without some mechanism to describe which changes are evolution-during-development, to be squashed, and which are separate logical changes, to remain separate. The current model relies on the submitter to make the distinction. Often it is helpful for reviewers to be able to see the final state, including all logical changes. Being able to do that is one of the benefits of push-to-review systems. I assume the intention was not to remove that view, but to remove implication of a single review request for all logically-independent changes. [1] https://secure.phabricator.com/book/phabflavor/article/writing_reviewable_code/#many-small-commits ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to enable e10s by default when running tests locally
Felipe G. writes: > Yeah, --e10s enables e10s in the browser for mochitest-chrome. However, > the test harness is a .xul file opened in a tab, and that runs that tab as > non-remote, so for most tests it ends up testing the same thing as not > using --e10s. Other tabs and/or windows opened manually by the test would > be remote. What makes that first tab non-remote? I had guessed that documents from "chrome:" would run in the browser process (by default, at least), but is it all determined by attributes on a browser element? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Just Autoland It
Honza Bambas writes: > On 1/25/2016 20:23, Steve Fink wrote: >> For navigation, there's a list of changed files at the top >> (below the fixed summary pane) that jumps to per-file anchors. > > Not good enough for review process. > >> Are you saying you want tabs or something for this (like >> splinter uses)? I'd certainly like something less sluggish, but >> maybe that's just my browser again. > > Yes please. Having one file on the screen at a time is very > useful. The next/previous file/comment keyboard shortcuts may be useful in the meantime. https://www.reviewboard.org/docs/manual/2.5/users/reviews/reviewing-diffs/#keyboard-shortcuts ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Just Autoland It
Boris Zbarsky writes: > On 1/23/16 9:48 PM, Mike Hommey wrote: >> Note that if /other/ changes from other bugs have happened to the same >> files between the last reviewed iteration and the rebase before landing, >> the interdiff will show them without any kind of visual cues. > > Ah, that's unfortunate. > > I agree that this is a hard problem, though (interdiff across > rebases). I guess that does mean that you can't really use the > final attached thing for the "I want to see the interdiff" use > case; need to push something to MozReview before rebasing to > address that. Yes, from the reviewing efficiency perspective, it is best to push MozReview updates for re-review on the same base revision. i.e. separate from the rebase/landing process. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Just Autoland It
>> On 25/01/16 05:44 PM, Eric Rescorla wrote: >> >On Mon, Jan 25, 2016 at 1:58 PM, Mike Hommey wrote: >> > >> >>It's also painful to use MozReview's comment system. The comments in the >> >>reviews pane don't show much diff context, and while I just realized >> >>it's possible to make it show more by hovering the diff part for a >> >>little while, it's not really great, as it doesn't actually show a diff >> >>view like the diff pane does, and switching to the diff pane a) is slow >> >>for large diffs and b) has an entirely different comment UX that doesn't >> >>seem really great either. >> >> >> > >> >Indeed. It would be great if it would just include 5-8 lines of context by >> >default. > On Mon, Jan 25, 2016 at 06:15:10PM -0500, Andrew Halberstadt wrote: >> It's not terribly obvious, but instead of clicking on a line number you can >> click and drag on the numbers to set the exact amount of context you want. Mike Hommey writes: > Which is something for the person writing the comment to do. Yes, please. > Also, even when they do that, most of the time, that won't > contain the surrounding code. AFAICT it is a diff view, and, as you say, hovering provides access to the rest of the function/file, but the animation is quite slow. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Just Autoland It
On Fri, 22 Jan 2016 14:24:38 -0500, Ehsan Akhgari wrote: > What about the case where the information doesn't exist in the > repository because the author, for example, cherry-picked a > specific commit on a throw-away branch because the rest of the > dependencies are still being worked on? Or, as another example, > what if the patch needs to be checked in only after another patch > (perhaps done by a different author) that is not connected to the > review at hand? I assume that, if the patches have a dependency on other work, then that would be explained to the reviewer, so that they can know how the patch will work. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Just Autoland It
Mike Hommey writes: > On Mon, Jan 25, 2016 at 11:23:59AM -0800, Steve Fink wrote: >> Heh. Your list of UI complaints is very similar to mine. Some comments: >> >> >> On 01/25/2016 04:26 AM, Honza Bambas wrote: > Also, iirc, when you reply diff comments in MozReview, the resulting > comments sent to bugzilla lack context (but I could be misremembering). I think you are remembering splinter here. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: nsThread now leaks runnables if dispatch fails
Xidorn Quan writes: > On Thu, Jan 7, 2016 at 6:47 AM, Karl Tomlinson <mozn...@karlt.net> wrote: >> Xidorn Quan writes: >> >>> You can keep a raw pointer yourself, and release it manually after you >>> find the dispatch fails, like what is done in >>> NS_DispatchToCurrentThread [1]. It is ugly, but it works and safe, >>> because Dispatch guarantees to leak when it returns failure code for >>> any async dispatch. >>> >>> [1] >>> https://dxr.mozilla.org/mozilla-central/rev/29258f59e5456a1a518ccce6b473b50c1173477e/xpcom/glue/nsThreadUtils.cpp#162-171 >> >> I think it would be much better to keep the sane ref-counting of >> the original implementation and add a different method for use when the >> thread is known to be in a state to accept new runnables but this >> method will assert and leak if the assumption was invalid. > > No, that would make the race condition inevitable. The race condition is not inevitable, but it may exist where code is not managing thread/object lifetimes appropriately. The "different" method can be used where there is risk of that happening. The race condition only matters when the runnable holds references to objects that must be released on a certain thread. These situations need to be managed by the calling code regardless of whether the Dispatch() method would leak or not on failure to manage the object appropriately. Passing references to main thread objects on other threads is difficult and just leaking the objects is not the solution. We can however have a special method for these situations that catches coding failures in a safe manner, provided that behavior is not enforced on callers that want the runnable released. Having the Dispatch() methods not leak means we get the behavior one would expect, which is the behavior that it had until recently. No quirky Release() calls are required by the callers that can safely release their objects on any thread. > It is probably doable to return an already_AddRefed somehow, e.g. 
via > return value, or a pointer parameter. But it isn't really worth to do > so. Yes, not necessary because the caller has the option to hold a reference if any thread can release. > Wrapping the hack in a decent interface is the best way to go. I don't know what you have in mind here, but the point of this thread is that we don't have that now. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: nsThread now leaks runnables if dispatch fails
Xidorn Quan writes: > On Wed, Jan 6, 2016 at 3:53 AM, Kyle Huey wrote: >> On Mon, Jan 4, 2016 at 4:10 PM, Randell Jesup wrote: >>> >>> Yup. In cases where we anticipate a possible Dispatch failure (which is >>> supposed to become impossible, but isn't currently) you can use the >>> (still-existing) raw ptr interface and handle Dispatch failure >>> explicitly to release the (leaked) reference, if it's safe. Not all >>> cases are safe to release in that case (which is what drove the initial >>> bug filing, where it tried to release JS objects on Dispatch failure off >>> mainthread). Leaking is better than crashing/sec-bugs. >> >> No, you can't. If you use the raw ptr interface today Dispatch will >> create its own reference and pass that to the already_AddRefed version >> which then leaks it. There's no way for the caller to handle this >> safely. Again, as karlt points out, Dispatch leaks today even if the >> caller does everything correctly. > > You can keep a raw pointer yourself, and release it manually after you > find the dispatch fails, like what is done in > NS_DispatchToCurrentThread [1]. It is ugly, but it works and safe, > because Dispatch guarantees to leak when it returns failure code for > any async dispatch. > > [1] > https://dxr.mozilla.org/mozilla-central/rev/29258f59e5456a1a518ccce6b473b50c1173477e/xpcom/glue/nsThreadUtils.cpp#162-171 I think it would be much better to keep the sane ref-counting of the original implementation and add a different method for use when the thread is known to be in a state to accept new runnables but this method will assert and leak if the assumption was invalid. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: nsThread now leaks runnables if dispatch fails
Kyle Huey writes: > (This is a continuation of the discussion in bug 1218297) > > In bug 1155059 we made nsIEventTarget::Dispatch take an > already_AddRefed instead of a raw pointer. This was done to allow the > dispatcher to transfer its reference to the runnable to the thread the > runnable will run on. That solves a race condition we've had > historically where the destructor of a runnable can run on either the > dispatching thread or the executing thread, depending on whether the > executing thread can run the event to completion before the > dispatching thread destroys the nsCOMPtr on the stack. IIUC solving the race condition wasn't the goal of the change in API, but something that was done to retain existing workarounds for leaks. > In bug 1218297 we saw a case where dispatch to a thread (the socket > transport service thread in this case) fails because the thread has > already shut down. In our brave new world, nsThread simply leaks the > runnable. It did previously leak in some quirky situations where objects were intentionally created with no reference so as to leak. e.g. auto runnable = new FooRunnable(...); target->Dispatch(runnable, flags); Since https://hg.mozilla.org/mozilla-central/rev/2265e031ab97#l25.46 however, Dispatch() leaks even when the caller is doing ref-counting properly. > It can't release the reference it holds, because that would > reintroduce the race condition we wanted to avoid, and it can't > release the reference on the correct thread so it's already gone away. > But now we have a new source of intermittent leaks. > > Was this anticipated when designing bug 1155059? I don't think > leaking is acceptable here, so we may need to back that out and return > to living with that race condition. I agree this approach is not going to work out. Short term, I think the situation could be improved and most of those changes kept by adding a DispatchOrLeak() variant that can be used by the callers that want to leak. 
Then we still have the leaks we had prior to 2265e031ab97 but the new ones are resolved. For the remaining (old) leaks I can think of two options: 1) Never call Dispatch() when it might fail and assert that it does not. 2) Keep an additional reference to the runnable in the caller if it doesn't want Dispatch() to release the last reference. With the former, the caller needs to ensure the thread lives long enough for the Dispatch() to succeed, or that cleanup happens before the thread is closed. With the latter, the caller needs to find another way to release when Dispatch() fails. Making Dispatch() infallible would permit only option 1. Keeping Dispatch() fallible allows the caller to use either option. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Too many oranges!
Kartikaya Gupta writes: > Personally I much prefer the new approach to reporting intermittents. > It's much easier for me to see at a glance (i.e. when the bugs are > updated with the weekly count) which ones are actually occurring more > frequently and which bug would be best to spend time on. With the old > system I just got so much intermittent-failure bugmail that I would > just ignore it all. Yes, periodic summaries would be much better than the previous approach, but not all occurrences are reported. Sometimes a search on http://brasstacks.mozilla.com/orangefactor/ is required. The "Bug Details" search is much faster than the search on the front page, but it would be more helpful if all occurrences triggered some sort of periodic report, to indicate whether a search would find anything. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Finding out if the main thread is currently animating
David Rajchenbach-Teller writes: > To improve the Performance Stats API, I'm looking for a way to find out > if we are currently animating something on the main thread. > > My definition of animating is pretty large, i.e. "will the user probably > notice if some computation on the main thread lasts more than 32ms". > > Do we have a reliable way to do that? I would guess that our main thread animations all run off a single 60 Hz throttle, which I assume is the RefreshDriver. If that throttle has no jobs scheduled to run, then I expect there are no animations in progress. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Decommissioning "dumbmake"
On Fri, 16 Oct 2015 10:31:43 -0700, Gregory Szorc wrote: > But as awesome as > these targets are, they can still build more than is desired (especially in > the edit .h file case). This slows down iteration cycles and slows down > developers. > > For this reason, I think dumbmake needs to remain or be replaced by > something that allows Gecko developers to easily perform a partial (and > probably incorrect end state) build. I /think/ this could be as simple as a > "xul" target that linked libxul. This way, people could type `mach build > dom/base xul` and build all binaries (including static libraries) under a > source directory and link libxul. Are you suggesting that would do something different to this? ./mach build dom/base toolkit/library ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: large memory allocations / resource consumption in crashtests?
On Mon, 27 Jul 2015 21:35:20 +1200, Karl Tomlinson wrote: > Sometimes it would be nice to check in crashtests that use, or attempt > to use, large memory allocations, but I'm concerned that checking in > these crashtests could disrupt subsequent tests because there is then > not enough memory to test what they want to test. Following up here, mainly just to report back on findings, for anyone considering the same in the future. I ended up backing out the test that I wanted to add [1] and I think the main blocker is that at least some platforms overcommit memory. Large allocations can succeed, but the lack of memory is not detected until the memory is actually used. When the memory is used, a process is killed, and AFAIK there is no guarantee that it is the process using the new memory. Even if we configured platforms to not overcommit, I'd still be uncomfortable adding a test that may force everything to swap to disk. It seems that fallible memory allocation is only a complete solution when constrained by address space, but perhaps we could run our tests with process data segment size constraints tuned for the hardware environment. The dummy unload event listener is helpful to avoid keeping things alive in the BF cache, but AFAIK SimpleTest.forceGC/CC() (or similar DOMWindowUtils methods) and carefully removing all references is the only way to ensure that memory is released immediately. [1] https://bugzilla.mozilla.org/show_bug.cgi?id=999376#c12 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Use of 'auto'
Jonas Sicking writes: > On Sun, Aug 2, 2015 at 3:47 AM, Xidorn Quan <quanxunz...@gmail.com> wrote: >> Probably we should generally avoid using constructor directly for those >> cases. Instead, use helper functions like MakeUnique() or MakeAndAddRef(), >> which is much safer. > We used to have NS_NewFoo() objects all over the codebase. A lot of those > were removed as part of deCOMtamination efforts. > Personally I think being able to directly call the constructor has made > for much easier to read code. Definitely worth the risk of refcount errors. Definitely easier to read than nsresult NS_NewFoo(getter_AddRefs(foo)) and perhaps arguably easier to read than already_AddRefed<T> MakeAndAddRef<Foo>() but I don't see much advantage of public constructors over something like static already_AddRefed<Foo> Foo::Create() or static nsRefPtr<Foo> Foo::Make() Constructors also have the potentially awkward feature of never failing, and hence the abundance of separate Init() methods. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Allowing web apps to delay layout/rendering on startup
James Burke writes: > On Thu, Jul 30, 2015 at 1:28 PM, Jeff Muizelaar <jmuizel...@mozilla.com> wrote: >> Can't you just make everything display:none until you're ready to show it? > Just using display: none seems like it will run into the same problem > that prompted bug 863499, where the browser did some render/paints of a > white page, which took time away from the async work completing. So > maybe I should not frame it as just pausing layout? I was hoping it > would also delay render and maybe paints that happen during startup, so > more time is given up front to the async activities. If the window is shown, then something will be rendered. Perhaps you want to delay showing the window? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: large memory allocations / resource consumption in crashtests?
Thanks everyone. That gives me some ideas on how to clean up after the page. I figure that then any OOM issues will at least be in the vicinity of the test doing the large allocation. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
large memory allocations / resource consumption in crashtests?
Sometimes it would be nice to check in crashtests that use, or attempt to use, large memory allocations, but I'm concerned that checking in these crashtests could disrupt subsequent tests because there is then not enough memory to test what they want to test. Is anything done between crashtests to clean up memory use? Or can CC be triggered to run during or after a particular crashtest? Do crashtests go into B/F cache on completion, or can we know that nsGlobalWindow::CleanUp() or FreeInnerObjects() will run on completion? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++
Tom Tromey writes: It was mentioned elsewhere in this thread that some code assigns to arguments. The style guide should clarify that parameters named aFoo should not be assigned to. Otherwise that defeats the purpose. Non-const references are the exception. If these are really needed, then something in the name would be helpful. Something specific to non-const references would be nice, but 'a' is better than nothing. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++
Bobby Holley writes: On Wed, Jul 8, 2015 at 4:45 PM, Karl Tomlinson mozn...@karlt.net wrote: I think we could relax the 'a' prefix requirement to be a convention used when identifying the variable as a parameter is useful. My opinion is that this is useful for most parameters in all non-trivial functions, but others disagree. I don't think that helps. One of the primary benefits of style rules is that they eliminate (or rather, reduce) the uncertainty and churn that comes from different reviewers having different preferences. Suppose I add a new method signature in a patch. If Alice finds a-prefixing useful and Bob does not, my likelihood of receiving a review nit depends on who's reviewing it - i.e. per-reviewer style guides, which are strictly worse than per-module style guides, which are strictly worse than a single style guide. Yes, I think you are right. That would be a likely unfortunate consequence if there is not a simple-enough set of rules to distinguish. A complicated set of rules is more trouble than it is worth to save a few letters IMO. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++
Bobby Holley writes: On Wed, Jul 8, 2015 at 3:52 AM, Gabor Krizsanits gkrizsan...@mozilla.com wrote: The priority is to automatically rewrite our source with a unified style. foo -> aFoo is reasonably safe, whereas aFoo -> foo is not, at least with the current tools. So we either need to combine the rewrite tools with static analysis, or just go with aFoo. +1 for consistency. At the risk of mostly repeating myself, I want to emphasize once more that: (1) We have already made the decision (in other threads) that consistency trumps everything else. If I hear correctly, it sounds like Jeff is arguing that it is not always helpful to know that a variable is a parameter, and so it shouldn't be a requirement that parameters are always named to indicate that they are a parameter. If I were to always use firstObject for the name of the first object of immediate relevance, then this would be consistent naming. If there is only one object, then firstObject would still be a consistent name, but I wouldn't say that using the name object is inconsistent. When it is helpful to know that a variable is a parameter then consistency in the convention to identify parameters is helpful. If it is not necessary to indicate that the variable is a parameter, then it doesn't matter. If we choose the conversion to remove existing 'a' prefixes, then we would need to either automatically move to a different naming convention that clearly identifies the purposes of the variables (out params as non-const references, and maybe pointers may be specially indicated) or manually go through and work out what names to give the variables that would benefit from clear identification. The scope of the manual process of course makes this impractical. I think we could relax the 'a' prefix requirement to be a convention used when identifying the variable as a parameter is useful. 
My opinion is that this is useful for most parameters in all non-trivial functions, but others disagree. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++
Jeff Gilbert writes: On Tue, Jul 7, 2015 at 5:41 PM, Karl Tomlinson mozn...@karlt.net wrote: Some people find the prefix helps readability, because it makes extra information immediately available in the code being examined, while you are indicating that this is a significant burden on readability. Can you explain why the extra letter is a significant burden? Because extra noise is being forced into variable names for minimal benefit. Every declaration is a sea of extra 'a's. Refactoring code means doing a lot of s/aFoo/foo/ and vice-versa. Reading each arg name requires first passing over 'a' before getting to anything relevant. Often this means that short function bodies can have every fifth or sixth letter being 'a'. I wouldn't see a problem with removing the 'a' prefix from parameter names in declarations and inline methods. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++
Jeff Gilbert writes: It can be a burden on the hundreds of devs who have to read and understand the code in order to write more code. Some people find the prefix helps readability, because it makes extra information immediately available in the code being examined, while you are indicating that this is a significant burden on readability. Can you explain why the extra letter is a significant burden? If the 'a' prefix is a burden then the 'm' prefix must be also, and so we should be using this->member instead of mMember. The opinions of a few over-harried reviewers should not hold undue sway over the many many devs writing code. ... unless people want code to be reviewed. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++
Jeff Gilbert writes: I work with a number of these, but after a page or two, why is it at all relevant which vars were args? For information flow? Should we mark locals that purely derive from args as `aFoo` as well? Long functions (which have poor readability anyway) generally have so much going on that the trivia of which vars are args does not seem very useful. I do not see how `aFoo` helps here, so please expand on this. A simple variable name, such as font for example, may identify any of a number of fonts. Such simple names, without any qualifier in the name, are often used in loops, for example, because it is the most important font in the immediate context. However, a simple variable may also be used in a parameter list because, when looking at the parameter list, it is obvious which font is relevant in the interface. That means that if font is seen in the body of a function, the first question that arises is: which font? If the variable is called aFont then we know which font, because we know what function we are in. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
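A hypothetical sketch of the naming point above (the function and names are invented for illustration): in a long body, a bare name is ambiguous, while the a-prefix ties the variable back to the parameter list without scrolling up.

```cpp
#include <string>

// "aFont" is unambiguously the caller's font; "font" is whatever local
// was most recently relevant in the surrounding context.
std::string DescribeFont(const std::string& aFont) {
  std::string font = aFont + "-bold";  // a local derived from the parameter
  return aFont + " / " + font;
}
```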
Re: Shutdown hangs are very common
Vladan D. writes: Should fixing shutdown hangs be higher priority? _exit() after profile-before-change notification would be the many-holes-with-one-plug bug to prioritize. https://wiki.mozilla.org/XPCOM_Shutdown https://bugzilla.mozilla.org/show_bug.cgi?id=662444 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Voting in BMO
I would like to vote for voting. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: what is new in talos, what is coming up
William Lachance writes: Hi Karl, On 2015-06-04 12:30 AM, Karl Tomlinson wrote: jma...@mozilla.com writes: We will deprecate those instances of compare-talos next quarter completely. The treeherder version seems to randomly choose which and how many of the results to load and so the comparison changes after reloads of the page. So this is a bug. :) Could you please file something here: https://bugzilla.mozilla.org/enter_bug.cgi?product=Tree%20Management&component=Perfherder For best results, include reproduction steps and comparison with the existing snarkfest interface. Thanks for the link. Knowing the right component is half the process of filing bugs. https://bugzilla.mozilla.org/show_bug.cgi?id=1171707 I don't know how to compare with snarkfest, but comparing treeherder with treeherder is sufficient to observe the bug. can we keep the snarkfest version running please until this is resolved? My main concern was because I inferred from the previous post that deprecation of snarkfest was scheduled on a timeline basis. Can we instead schedule on a when-it's-ready basis, please? If I may sneak in a request or two, then a number of results or an estimate of standard error in the mean would be helpful to know how to interpret the standard deviations. Also, a way to see whether higher or lower is better even for those tests without enough data to detect a statistically significant change would be a bonus. https://bugzilla.mozilla.org/show_bug.cgi?id=1171694 https://bugzilla.mozilla.org/show_bug.cgi?id=1171703 I'm keen to find out what is in the Details links, but they currently just ask me to wait a minute. Likewise, that sounds like a bug. Clicking on details is supposed to open up a subtest summary. Here it seems to work fine for me: https://treeherder.mozilla.org/perf.html#/compare?originalProject=mozilla-central&originalRevision=b2ad2f9f8e53&newProject=mozilla-central&newRevision=f21fac50a8cc That link is loading for me, and that info can be very helpful, thanks. 
However, other links are not. https://bugzilla.mozilla.org/show_bug.cgi?id=1171710 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Extra commit metadata on hg.mozilla.org
Gregory Szorc writes: hg.mozilla.org now displays extra metadata on changeset pages. e.g. https://hg.mozilla.org/mozilla-central/rev/dc4023d54436. Read more at http://gregoryszorc.com/blog/2015/06/04/changeset-metadata-on-hg.mozilla.org/ Thank you, Gregory. I'm sure that will be *very* useful. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: The War on Warnings
Nicholas Nethercote writes: Do warnings (as opposed to NS_ASSERTION) do anything in tests? I don't think they do. If that's right, a warning is only useful if a human looks at it and acts on it, and that's clearly not happening for a lot of these. Warnings in tests don't do anything but log that a problem has occurred. But, when the warnings are not noisy, this is often useful for tracking the cause of intermittent failures. Similarly, warnings about real problems are helpful when running debug builds, but we are being spammed with so many warnings that it is hard to imagine they can all be real problems. If they are, then we are doing a poor job of keeping up with them. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: what is new in talos, what is coming up
Thanks, Joel. I've benefited from being able to use perf.html#/comparechooser and will look forward to the performance discussion. jma...@mozilla.com writes: 2) compare-talos is in perfherder (https://treeherder.mozilla.org/perf.html#/comparechooser), other instances of compare-talos have a warning message at the top indicating you should use perfherder. We will deprecate those instances of compare-talos next quarter completely. This is very helpful to present PGO separated and to know that higher is better for canvasmark, but it is not yet ready to replace http://perf.snarkfest.net/compare-talos/index.html The treeherder version seems to randomly choose which and how many of the results to load and so the comparison changes after reloads of the page. upcoming work: 2) continue polishing perfherder graphs, compare-view Perhaps the above issue is already in this work and you know it will be addressed by next quarter, but, if not, can we keep the snarkfest version running please until this is resolved? Today, the treeherder version is not loading enough results to do a reasonable comparison, while the snarkfest version doesn't seem to have the problem and presents results almost instantly. If I may sneak in a request or two, then a number of results or an estimate of standard error in the mean would be helpful to know how to interpret the standard deviations. Also, a way to see whether higher or lower is better even for those tests without enough data to detect a statistically significant change would be a bonus. I'm keen to find out what is in the Details links, but they currently just ask me to wait a minute. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: The War on Warnings
Martin Thomson writes: I guess that most of these are as a result of actual problems, even if they are minor. The ones that are actual problems would be the ones that are harder to resolve. In my experience, however, when I've seen many of one kind of warning, investigation has revealed that the problem is that a warning is presented when there is nothing wrong. Often this is because someone has carelessly used NS_ENSURE_something. Perhaps that is because the macro names don't say what they do. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to implement and ship: document.execCommand(cut/copy)
On 2015-05-23 5:02 AM, Jesper Kristensen wrote: It would be nice if you could also support paste. Ehsan Akhgari writes: Handling paste is a difficult topic, and I definitely don't have a good answer yet. Prompting for paste has two issues: 2. The synchronous nature of the execCommand API mandates a modal prompt, which is terrible for user experience, so we would probably need some kind of an out of band permission request. A synchronous API would be a problem for pasting even without the prompt, unless perhaps pasting data only from within the same process. Fetching the clipboard from another application, or process, is an intrinsically asynchronous operation. Attempts have been made, with varying degrees of success, to dress up the asynchronous fetch as a synchronous operation. DataTransfer usage may have the luxury of being able to pre-fetch the data in all formats before dispatching the event referencing the DataTransfer, which despite the inefficiencies can provide a synchronous API. I'm not familiar with execCommand, but it sounds like even that would not be possible. We shouldn't expose any more asynchronous operations in synchronous APIs. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: PSA: New C++11 features made available by dropping support for gcc-4.6 in Gecko 38
Ehsan Akhgari writes: On Monday, May 11, 2015, Xidorn Quan quanxunz...@gmail.com wrote: On Tue, May 12, 2015 at 7:29 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: On 2015-04-30 7:57 AM, Xidorn Quan wrote: I guess we probably should forbid using any expression with side effect for member initializers. Hmm, why should we restrict them more than what can appear in the constructor initializer list Because it seems to me that the member initializers are more obscure for their initializing order. Given they can be dispersed widely in different places, it would be far harder to reason out the correctness from a piece of code if the initializers could have side effects. That's a good point! But I think expressions with side effects don't really narrow down the issue. It's more an expression that can affect what other expressions in the initializer list get evaluated to. And even that issue is not specific to C++11 member initializers; the old school ones are similarly affected (and in fact the initialization order is exactly the same, based on the order of declarations, bases first), but at least with those you will get a compiler warning if you mistype the initialization order in the body of the constructor... In short, I still don't see an issue that is specific to C++11 member initializers. I think it is not so much the order of initialization or dependencies on other members that is different here but the fact that member initializers may be dispersed in different parts of the code, well away from the constructors, where one would usually look for initialization side effects. I assume one could argue that the constructors of member variables may have side effects and that code is not near the owning class's constructor. However, member variable constructors do not have access to private data or methods in the owning class. I assume member initializers do have this access. 
I expect things will be clearest if member initializers are used only for effects on the associated member. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
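A hypothetical sketch of the concern above: a C++11 member initializer lives at the declaration, away from every constructor, yet runs for each of them and, unlike a member's own constructor, can call private members of the owning class (all names invented):

```cpp
// The NSDMI for mId calls a private static helper with a side effect,
// and it does so for every constructor, even ones that never mention mId.
class Counter {
 public:
  Counter() = default;      // mId is still initialized via the NSDMI
  explicit Counter(int) {}  // ...and for this constructor too

  int Id() const { return mId; }

 private:
  static int NextId() { return ++sLast; }  // side effect at the declaration
  int mId = NextId();  // dispersed away from the constructors
  static int sLast;
};
int Counter::sLast = 0;
```

This is the sense in which the initializer's side effect hides where one would not usually look for it.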
Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used
Ehsan Akhgari writes: On 2015-05-07 5:53 PM, Karl Tomlinson wrote: Ehsan Akhgari writes: This seems similar to the compiler warning situation. Usually at least, I don't think we should automatically modify the code in line with how the compiler reads the code just to silence the warning. Instead the warning is there to indicate that a programmer needs to check the code. These cases bear no similarity whatsoever. I can't think of any compiler warnings that can be automatically fixed without changing the meaning of the program. The warning that you are proposing to fix here is -Woverloaded-virtual. No. My proposal is completely unrelated to function overloading. Perhaps you're thinking of clang's -Winconsistent-missing-override? This proposal obviously fixes that warning (as a mere side effect), but that warning wouldn't catch everything that this proposal covers. Ah, yes. Sorry for confusing things. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used
Ehsan Akhgari writes: This seems similar to the compiler warning situation. Usually at least, I don't think we should automatically modify the code in line with how the compiler reads the code just to silence the warning. Instead the warning is there to indicate that a programmer needs to check the code. These cases bear no similarity whatsoever. I can't think of any compiler warnings that can be automatically fixed without changing the meaning of the program. The warning that you are proposing to fix here is -Woverloaded-virtual. At least once we can build with this warning enabled, I recommend making this warning fatal instead of covering over it by adding an override annotation that the author may have never intended. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used
Ehsan Akhgari writes: On 2015-04-27 9:54 PM, Trevor Saunders wrote: On Mon, Apr 27, 2015 at 09:07:51PM -0400, Ehsan Akhgari wrote: On Mon, Apr 27, 2015 at 5:45 PM, Trevor Saunders tbsau...@tbsaunde.org wrote: I believe we have some cases in the tree where a virtual function doesn't override but is final so you need to write virtual final. I believe one of those cases may be so a function can be called indirectly from outside libxul, and another might be to prevent people using some cc macros incorrectly. Any chance you could point us to those functions, please? http://mxr.mozilla.org/mozilla-central/source/xpcom/glue/nsCycleCollectionParticipant.h#548 Hmm, we can probably make an exception for this one. (I assume below that you are proposing handling these through method-specific exceptions.) and this one isn't final, but we could make it final if the tu will be built into libxul (because then devirtualization is fine) http://mxr.mozilla.org/mozilla-central/source/dom/base/nsIDocument.h#1287 I'm not sure why that function is virtual. If it's just in order to make it possible to call it through a vtable from outside of libxul, I'm not sure why we need to treat it differently than other XPCOM APIs for example. If this is not used outside of libxul, we can probably make it non-virtual. XPCOM APIs are usually abstract interfaces and so not final. final could be removed and have valid code, but may be a useful optimization or enforcement of intent. virtual final seems necessary for some use cases, and there is no redundancy. Is there a need to ban this usage? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
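A hypothetical sketch of the "virtual final" case discussed above: a function that introduces a new virtual (so it overrides nothing) yet is immediately sealed, for example so it can be called through a vtable from outside the library while still allowing devirtualization inside it. There is no redundancy, since neither keyword implies the other:

```cpp
class Participant {
 public:
  // virtual and final, with no override: a new virtual that no subclass
  // may replace.
  virtual bool CanSkip() final { return false; }
  virtual ~Participant() = default;
};
```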
Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used
Ehsan Akhgari writes: I think there's a typo of some sort in the question, but if you meant every overriding function must be marked with override, then yes, that is the change I'm proposing, but the good news is that you can now run clang-tidy on the entire tree and get it to rewrite the source code to make sure that the override keywords are present where they are needed, and we can do that as often as we would like. IOW, this can be done without requiring every C++ programmer to remember to do it always. I fear that an automatic update would be more than just enforcing a style, because override keywords imply programmer intent. If the proposal is to periodically automatically add override keywords where methods override but are currently not annotated as such, then it seems we should have an annotation to indicate that a method is not intended to override. However, that would require annotating all methods. This seems similar to the compiler warning situation. Usually at least, I don't think we should automatically modify the code in line with how the compiler reads the code just to silence the warning. Instead the warning is there to indicate that a programmer needs to check the code. Perhaps though there is a case for a one-off change to add override automatically so that warnings can be enabled on new code. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
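A small sketch of why the annotation carries intent (class names invented): with override present, an accidental signature mismatch becomes a compile error instead of silently introducing a new virtual.

```cpp
class Base {
 public:
  virtual int Value() const { return 1; }
  virtual ~Base() = default;
};

class Derived : public Base {
 public:
  // If this signature mismatched Base's (e.g. the const were dropped),
  // the override keyword would turn the silent hiding into a compile error.
  int Value() const override { return 2; }
};
```

An automated tool adding override where it happens to apply records what the compiler sees, not necessarily what the author meant.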
Re: Can we make try builds default to a different profile than Nightly/Aurora/Beta/Release builds?
On Wed, 8 Apr 2015 20:40:12 -0700, Nicholas Alexander wrote: On Wed, Apr 8, 2015 at 4:06 PM, Mike Hommey m...@glandium.org wrote: If running nightly screws up profiles for older versions, that's a serious problem imho. Really? Presumably not every forward DB migration can be reverted without some data loss in theory, and in my limited practice, handling DB downgrades is pretty damn hard. Is there an expectation that Release -> Nightly -> Release will work? Preserve data? The fact that Nightly and Release and ESR default to the same profile implies that it is expected that this will work. Either channels should default to different profiles or this should work. We need to choose an approach rather than pretending the problem is with the user. Downgrades don't need to work perfectly. I guess it's OK if running a build with an older db format uses the old db and doesn't have any of the changes made by a build with the newer format, but then there is still the issue of whether and how to merge changes to the old db since the new db was created when switching back to the new db again. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: SpiderMonkey and XPConnect style changing from |T *p| to |T* p|
Bobby Holley writes: On Fri, Mar 27, 2015 at 2:04 PM, Mats Palmgren m...@mozilla.com wrote: So let's change the project-wide coding rules instead to allow 99 columns as the hard limit, but keep 80 columns as the recommended (soft) limit. I think we should avoid opening up a can of worms on the merits of different styles, and instead focus on the most pragmatic ways to unify Gecko and JS style. Under that framework, Mats' proposal makes a lot of sense. ... in that it tolerates multiple different styles. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests
On Fri, 19 Dec 2014 18:58:53 -0500, Ehsan Akhgari wrote: On 2014-12-19 4:40 PM, Nils Ohlmeier wrote: On Dec 19, 2014, at 6:56 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: Logging sufficiently is almost always enough to not have to use these timers, as those tests have demonstrated in practice. I disagree. Especially as the logging of mochitest is really poor. - the timestamps of log messages are lost as described in bug 1020516 - SimpleTest only has one log level: info(). Which I think is the main reason the log files get big, because test writers can't fine-tune the logging. - Log messages from mochitest are not printed together with the stdout/stderr of the browser (so correlating these two is almost impossible) - Logging behaves differently on your local machine than on the build servers Some of these (such as the latter two) seem to be issues that you'd also have with your own custom logging in mochitest, so I'm not sure how the timer based logging is not affected by them. I assume multiple custom timers can provide different output that can support a diagnosis without timestamps. BTW I understand that WebRTC has pretty special and probably unique requirements, compared to what the rest of the mochitest users do with it. FWIW, these issues plague test result analysis in general, not just rtc. But thinking of this more, do you find it useful to be able to register a callback with the harness to be called when the harness decides to kill your test because it has timed out? (Obviously the callback would be informational, and would not be able to prevent the harness from killing the test and moving on.) That might be useful, but perhaps it would not be necessary if logging worked. Perhaps it is better to have an error handler that only runs when there are errors, reducing spam, but that means keeping disjoint parts of the test in sync, perhaps putting more state in global scope, whereas logging is simple to maintain. 
___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Getting rid of already_AddRefed?
On Tue, Dec 23, 2014 at 1:21 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: Are there good use cases for having functions accept an nsRefPtr<T>? If not, we can outlaw them. Aryeh Gregor writes: Do we have a better convention for an in/out parameter that's a pointer to a refcounted class? editor uses this convention in a number of functions for "pass me a node/pointer pair as input, and I'll update it to the new value while I'm at it". If there's a good substitute for this, let me know and I'll use it in the future when cleaning up editor code. nsRefPtr<T>* ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
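A sketch of the in/out convention being discussed (names invented, and std::shared_ptr standing in for Gecko's nsRefPtr to keep the example self-contained): the callee reads the incoming strong reference through the pointer and overwrites it with the new value.

```cpp
#include <memory>
#include <string>

using Node = std::string;  // stand-in for a refcounted DOM node

// Takes a pointer to the caller's strong reference: reads the old node,
// then updates the caller's reference in place.
void AdvanceNode(std::shared_ptr<Node>* aNode) {
  *aNode = std::make_shared<Node>(**aNode + "/next");
}
```

The pointer-to-strong-reference signature makes the in/out intent visible at the call site, where a plain reference would not.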
Re: Unified compilation is going to ride the train
On Fri, 28 Nov 2014 00:46:07 -0800, L. David Baron wrote: On Friday 2014-11-28 10:12 +0900, Mike Hommey wrote: The downside from doing so, though, is that non-unified build *will* be broken, and code purity (right includes in the right sources, mostly) won't be ensured. Do you think this is important enough to keep non-unified builds around? Another disadvantage here is that it will make adding or removing source files harder, because you'll have to clean up the accumulated nonunified bustage that shows up when files are shifted around between unified files. (This might be somewhat harder to fix a year or two later than it is when causing it.) Would it be helpful to be more explicit about the special treatment of include files in unified sources? Perhaps have no include directives in UNIFIED_SOURCES files, but have a separate .h file at the beginning of the unified file (explicitly listed or automatically added by the unification) that includes all necessary headers for all UNIFIED_SOURCES. I guess it is less likely that include lists would be cleaned up if they are in separate files, which would bring us back to IWYU. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [RFC] We deserve better than runtime warnings
L. David Baron writes: On 20/11/14 17:56, Boris Zbarsky wrote: Ah, we can't. We can whitelist the number of assertions in a mochitest (or a number range if the number is not quite stable), but not the text of the assertion. On Thursday 2014-11-20 18:05 +0100, David Rajchenbach-Teller wrote: I believe that we can provide something less fragile than that. It's not all that fragile, since most tests don't have known assertions, so in that vast majority of tests, we have test coverage of regressions of assertion counts. This covers reftests, crashtests, and all mochitests except for mochitest-browser-chrome (bug 847275 covers doing it there). runcppunittests.py, runxpcshelltests.py, and rungtests.py use XPCOM_DEBUG_BREAK=stack-and-abort so we should have coverage there too. When web-platform-tests run with debug builds, we can do the same there because known crashes can be annotated in those tests. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Removing unused Perl scripts from the tree
Nicholas Nethercote writes: UNSURE -- ./layout/mathml/updateOperatorDictionary.pl - appears to be in fairly recent use This was used to generate an in-tree file from an external spec. It is reasonably likely that there will be future changes to the spec, in which case the script will again be useful. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: treating B2G device tests as tier 1
Jonas Sicking writes: But any type of regression is cause for backout. While I agree regressions are bad, this isn't the usual process. If it were, then I wouldn't bother filing bugs, but merely back out the offending change. There is some kind of test for whether the regression costs more than the improvements made, but it comes down to a judgement call from the module owner AIUI. Regressions that sit in the tree make it dramatically harder to write and test other patches. It's generally much better to back the offending patch out to allow everyone else to go at full speed. Perhaps it is, but this would be quite a change in process. Some kind of policy or guidelines would be helpful, or it could well get out of control. Backouts usually cause regressions too. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Getting rid of already_AddRefed?
Aryeh Gregor writes:
> The compiler is required to use the move constructor (if one exists)
> instead of the copy constructor when constructing the return value of
> a function, and also when initializing an object from the return
> value of a function, or assigning the return value of a function. So
> if you have
>
>   Foo GetFoo() {
>     Foo f(1, 2, 7);
>     /* do lots of stuff to f */
>     return f;
>   }
>
>   void MyFunction() {
>     Foo f = GetFoo();
>   }
>
> the copy constructor of Foo will not get called anywhere.

I guess that means that with

  struct Bar {
    Bar(Foo f) : mF(f) {}
    Foo GetFoo() { return mF; }
    Foo mF;
  };

GetFoo() would give away what mF owns? If so, can we require that be
more explicit somehow?
Re: Depending on libnotify
Philip Chee writes:
> On 02/07/2014 18:13, David Rajchenbach-Teller wrote:
>> We had libnotify support but this was removed from the tree on the
>> grounds that it didn't suit our needs.

I don't think those were the grounds. AIUI it was just that people
didn't want to put effort into a lower-priority platform, so it was
easier to remove existing support. But libnotify provides something
quite different from the layer on inotify that David wants here.
Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates
Thanks for the overview of a real problem, Andrew. (I recall having to
add an exception for a Mozilla Root CA to access email at one time.)

Andrew Sutherland writes:
> I propose that we use a certificate-observatory-style mechanism to
> corroborate any invalid certificates by attempting the connection
> from 1 or more trusted servers whose identity can be authenticated
> using the existing CA infrastructure.

Although this can identify a MITM between the mail client and the
internet, I assume it won't identify one between the mail server and
the internet.

> *** it looks like you are behind a corporate firewall that MITMs you,
>     you should add the firewall's CA to your device. Send the user to
>     a support page to help walk them through these steps if that
>     seems right.
>
> *** it looks like the user is under attack

I wonder how to distinguish these two situations and whether they
really should be distinguished.
Re: Gecko style: Formatting function return type and specifiers
Birunthan Mohanathas writes:
> For top-level function definitions, the recommended style is:
>
>   template <typename T>
>   static inline T
>   Foo()
>   {
>     // ...
>   }

The main reason for having the function name at the start of a new
line, I assume, was to help some tools (including diff) that look
there for function names.

> However, for function declarations and inline member functions, there
> does not seem to be a definitive style.

For indented declarations within a class, AFAIK the only reason for a
line break would be if there is a limit on the number of characters in
a line. (diff and other tools are not going to find the function name
anyway.) Then it comes down to judgement re whether to split the line
before the function name, or between parameters, etc. If there are no
parameters, then there is usually no need to break the line.
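To illustrate the judgement call described above for in-class
declarations, here is a sketch assuming an 80-column limit; the class
and member names are entirely hypothetical, not real Gecko code:

```cpp
class nsHypotheticalFrame {
public:
  // No parameters and a short name: no reason to break the line.
  int GetWidth() const { return mWidth; }

  // A declaration that would overflow the column limit: breaking
  // between the parameters keeps the name where the eye expects it,
  // and diff gains nothing from a break before the name here anyway.
  static int ComputePreferredSize(int aAvailableWidth,
                                  int aAvailableHeight);

private:
  int mWidth = 0;
};
```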
Re: Policy for disabling tests which run on TBPL
Thank you for putting this together. It is important.

jmaher writes:
> This policy will define an escalation path for when a single test
> case is identified to be leaking or failing and is causing enough
> disruption on the trees.
>
> Exceptions:
> 1) If this test has landed (or been modified) in the last 48 hours,
>    we will most likely back out the patch with the test
> 2) If a test is failing at least 30% of the time, we will file a bug
>    and disable the test first
>
> I have adjusted the above policy to mention backing out new tests
> which are not stable, working to identify a regression in the code or
> tests,

I see the exception for regressions from test changes, but I didn't
notice mention of regressions from code. If a test has started failing
intermittently and is failing at least 30% of the time, then I expect
it is not difficult to identify the trigger for the regression. (It is
much more difficult to identify an increase in the frequency of
failure.) Could we make it clear that the preferred solution is to
back out the cause of the regression? Either something in the opening
paragraph, or perhaps add the exception:

  If a regressing push or series of pushes can be identified, then the
  changesets in those pushes are reverted.

We won't always have a single push, because other errors preventing
tests from running are often not backed out until after several
subsequent pushes.
Re: Recommendations on source control and code review
On Fri, 11 Apr 2014 13:29:01 -0700, Gregory Szorc wrote:
> https://secure.phabricator.com/book/phabflavor/article/writing_reviewable_code
>
> I would be thrilled if we started adopting some of the
> recommendations such as more descriptive commit messages and many,
> smaller commits over fewer, complex commits.

Thank you very much, Gregory. The "Many Small Commits" section
explains things well, and the nearest thing I found in our developer
docs was merely a suggestion, so I've made our recommendations
stronger [1] and linked to this article.

[1] https://developer.mozilla.org/en-US/docs/Developer_Guide/How_to_Submit_a_Patch$compare?to=549939&from=468795