Intent to ship: CSS properties text-decoration-skip-ink, text-decoration-thickness, and text-underline-offset

2019-09-12 Thread Daniel Holbert
As of today (Sept 12 2019), I've turned on support for the CSS properties
text-decoration-skip-ink, text-decoration-thickness, and
text-underline-offset, on all platforms.

These features have been developed behind the preferences
"layout.css.text-underline-offset.enabled",
"layout.css.text-decoration-thickness.enabled", and
"layout.css.text-decoration-skip-ink.enabled".

Other UAs shipping these features:
* Safari 12.1 (and newer) supports all these features
* Chrome/Blink supports text-decoration-skip-ink, and they have
https://bugs.chromium.org/p/chromium/issues/detail?id=785230 open on the
other properties.

Bug to enable by default:
https://bugzilla.mozilla.org/show_bug.cgi?id=1573631

Note that these properties were already enabled by default in Nightly and
early-beta, as of Firefox 70. I intend to uplift bug 1573631's patch to
beta, to remove that restriction, as part of the Firefox 70 beta cycle, so
that this will ship in Firefox 70.
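
For anyone who wants to kick the tires, here's a rough sketch of the syntax
(the selector and values are made up, purely for illustration):

  a.fancy-link {
    text-decoration-line: underline;
    text-decoration-thickness: 2px;    /* a thicker-than-default line */
    text-underline-offset: 3px;        /* push it further from the text */
    text-decoration-skip-ink: auto;    /* the default; skips descenders */
  }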

These features were previously discussed in these intent-to-prototype
threads:
https://groups.google.com/d/msg/mozilla.dev.platform/Xsts-2ORpRY/j96vHsIRAAAJ
https://groups.google.com/d/msg/mozilla.dev.platform/VWcV5QUzJF0/O9hET80TAgAJ
https://groups.google.com/d/msg/mozilla.dev.platform/iwkhLi-2mxw/2aJPi80TAgAJ


Re: Intent to Prototype: text-decoration-skip-ink

2019-08-07 Thread Daniel Holbert
On Wednesday, August 7, 2019 at 12:18:16 PM UTC-7, Tom Schuster wrote:
> Very cool, I have been looking forward to this. I hope we can enable
> this for links by default at some point, like Chrome.

That's the plan!

The new property here, "text-decoration-skip-ink", has the unusual distinction 
of being a CSS feature where the property's *default value* ("auto") is the 
value that triggers the new behavior.

So once we pref on support for the property, then the default behavior for all 
text-decorations will change -- though sites can request the old behavior by 
specifying "text-decoration-skip-ink: none" (which turns on the 
"legacy"/current behavior).


Re: Intent to ship: CSS Containment

2019-05-27 Thread Daniel Holbert
Followup: it turns out this wasn't quite ready to be enabled for 68, so I'm
now targeting Firefox 69 as the release where CSS Containment will be
default-enabled in release.

On Mon, Mar 18, 2019 at 12:01 PM Daniel Holbert 
wrote:

> As of today (March 18th 2019), I intend to turn CSS Containment
> <https://drafts.csswg.org/css-contain/> on by default on all platforms,
> in Firefox Nightly 68. It has been developed behind the
> 'layout.css.contain.enabled' preference.
>
> Other UAs shipping this or intending to ship it are:
> * Chrome & other Blink-based UAs (Chrome has supported since Chrome 52,
> released in 2016, according to caniuse
> <https://caniuse.com/#search=contain>).
>
> Bug to turn on by default:
> * https://bugzilla.mozilla.org/show_bug.cgi?id=1532471 is where I'll be
> turning it on in Nightly by default (with a guard to only let it ship as
> far as early-beta)
> * https://bugzilla.mozilla.org/show_bug.cgi?id=1487493 is where I'll be
> removing the guard & letting it ride the trains to release, assuming
> everything goes well. (I anticipate that this will happen for the Firefox
> 68 release cycle.)
>
> This feature was previously discussed in this "intent to implement"
> thread:
> https://groups.google.com/d/msg/mozilla.dev.platform/q-uXVVClcU4/WswhXtlWqFIJ
> (though the spec and the feature have evolved a good deal since then)
>
> Note: we don't currently intend to ship support for one of the keywords
> mentioned in the spec & which Chrome sort-of supports -- "contain:style".
> Chrome doesn't implement this keyword properly/robustly, and we have doubts
> about whether the complexity & maintenance burden of a correct
> implementation would be justified by this keyword's limited use cases. See
> https://github.com/w3c/csswg-drafts/issues/3280 for more on this. The CSS
> Working group has marked this keyword as "at-risk", as a result of the
> discussion around these concerns.
>
> Many thanks to former-interns Morgan Reschenberg, Yusuf Sermet, and Kyle
> Zentner for their hard work on this feature!
>


Re: Intent to ship: CSS Containment

2019-03-19 Thread Daniel Holbert
On Tue, Mar 19, 2019 at 2:13 AM Xidorn Quan  wrote:

> On Tue, Mar 19, 2019, at 6:01 AM, Daniel Holbert wrote:
> > As of today (March 18th 2019), I intend to turn CSS Containment
> > <https://drafts.csswg.org/css-contain/> on by default on all platforms,
> in
> > Firefox Nightly 68. It has been developed behind the
> > 'layout.css.contain.enabled' preference.
>
> IIRC this feature is designed to allow authors to provide hints to UAs for
> some optimization opportunities in the rendering pipeline.
>

Right, yeah - that's the primary intent.

> Does our implementation also include any such optimizations for faster
> rendering, or is it currently just a basic implementation for conformance?
>

It does! The main one is
https://bugzilla.mozilla.org/show_bug.cgi?id=1497414 (making
`contain:layout size` elements into reflow roots).

> If we already include optimizations based on the hints, do we know how much
> they help the performance?
>

It entirely depends on the use-case, and of course it depends on web
authors opting in to use it.

The best results will come from adding containment around rapidly-changing
(i.e. frequently-reflowing) content, in a page with other
expensive-to-reflow content (which we can then ignore during the frequent
reflows, if we make the rapidly-changing area into a reflow root).  As one
example, we tried adding "contain:layout style" to the Web Console
text-entry area in https://bugzilla.mozilla.org/show_bug.cgi?id=1502524 ,
and this gave a ~50% reduction in time spent in reflow for a particular
stress test. However, we had to back it out in that case, because the Web
Console's functionality depends on some layout information *not* being
contained.  But that's an example of the sort of speedup that this can
give, in cases where it *is* appropriate to use.
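
For reference, the sort of opt-in I'm describing looks roughly like this
(the class name is made up; this is just a sketch of the pattern, not a
recommendation for any particular page):

  /* Isolate a frequently-updating widget so that its reflows don't have
     to touch the rest of the page: */
  .live-ticker {
    contain: layout size;
  }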

> - Xidorn


Re: Intent to ship: CSS Containment

2019-03-18 Thread Daniel Holbert
On Mon, Mar 18, 2019 at 12:52 PM Daniel Holbert 
wrote:

> Summary:
>   CSS Containment gives web developers several ways of indicating that a
> subtree isn't influenced by the rest of the page. This may allow UAs to
> perform certain optimizations that they otherwise wouldn't be able to do.
>

Sorry, I slightly mis-stated that.  Closer to the truth:
"CSS Containment gives web developers several ways of indicating that a
subtree *does not influence the rest of the page*"

(The feature's design is mostly focused on preventing side effects from
*leaking out* of a subtree, rather than preventing outside things from
having side effects that leak into a subtree.)
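
As a quick illustration (hypothetical selector; just a sketch of the shape
of the feature), a site might promise that a widget's layout and painting
can't leak out of its subtree like so:

  .sidebar-widget {
    contain: layout paint;
  }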


> Bug:
>   https://bugzilla.mozilla.org/show_bug.cgi?id=1150081
>
> Link to standard:
>   https://drafts.csswg.org/css-contain/
>
> Platform coverage:
>   All platforms
>
> Estimated or target release:
>   Firefox 68
>
> Preference behind which this will be implemented:
>   layout.css.contain.enabled (probably enabled by default in Nightly as of
> tomorrow, March 19)
>
> Is this feature enabled by default in sandboxed iframes? If not, is there
> a proposed sandbox flag to enable it? If allowed, does it preserve the
> current invariants in terms of what sandboxed iframes can do?
>   Enabled by default everywhere. It has no impact on what sandboxed
> iframes can do -- it's purely a way of constraining sizing/painting
> behavior.
>
> DevTools bug:
>   None at the moment. This feature has subtle effects and I don't know of
> any useful devtools work to be done for it.
>
> Do other browser engines implement this?
>   Chrome (Blink) implements it as of version 52. Their
> intent-to-implement:
> https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/9W80Kw-z3ss
>
> web-platform-tests:
>  https://wpt.fyi/results/css/css-contain
>
> https://wpt.fyi/results/css/vendor-imports/mozilla/mozilla-central-reftests/contain
>
> Is this feature restricted to secure contexts?
>  No, not currently. Note that Chrome doesn't restrict it, so it could
> conceivably create interop issues if we restricted it without getting them
> to also commit to restricting it.
>
> On Mon, Mar 18, 2019 at 12:39 PM Daniel Holbert 
> wrote:
>
>> Yeah, sorry - our earlier intent-to-implement thread predated our current
>> boilerplate (which includes stuff like test coverage).  And for
>> intent-to-ship, our boilerplate text is pretty minimal.
>>
>> Answering your direct question: yes, there is good web platform test
>> coverage for this feature.  I'll post a followup with answers to our other
>> typical intent-to-implement fields, too.
>>
>> On Mon, Mar 18, 2019 at 12:07 PM James Graham 
>> wrote:
>>
>>> On 18/03/2019 19:01, Daniel Holbert wrote:
>>> > As of today (March 18th 2019), I intend to turn CSS Containment
>>> > <https://drafts.csswg.org/css-contain/> on by default on all
>>> platforms, in
>>> > Firefox Nightly 68. It has been developed behind the
>>> > 'layout.css.contain.enabled' preference.
>>>
>>> Apologies if I've missed it, but I can't see any mention of whether this
>>> feature has — meaningful — cross browser (i.e. wpt) tests in the ItI
>>> thread or here.
>>


Re: Intent to ship: CSS Containment

2019-03-18 Thread Daniel Holbert
Summary:
  CSS Containment gives web developers several ways of indicating that a
subtree isn't influenced by the rest of the page. This may allow UAs to
perform certain optimizations that they otherwise wouldn't be able to do.

Bug:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1150081

Link to standard:
  https://drafts.csswg.org/css-contain/

Platform coverage:
  All platforms

Estimated or target release:
  Firefox 68

Preference behind which this will be implemented:
  layout.css.contain.enabled (probably enabled by default in Nightly as of
tomorrow, March 19)

Is this feature enabled by default in sandboxed iframes? If not, is there a
proposed sandbox flag to enable it? If allowed, does it preserve the
current invariants in terms of what sandboxed iframes can do?
  Enabled by default everywhere. It has no impact on what sandboxed iframes
can do -- it's purely a way of constraining sizing/painting behavior.

DevTools bug:
  None at the moment. This feature has subtle effects and I don't know of
any useful devtools work to be done for it.

Do other browser engines implement this?
  Chrome (Blink) implements it as of version 52. Their intent-to-implement:
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/9W80Kw-z3ss

web-platform-tests:
 https://wpt.fyi/results/css/css-contain

https://wpt.fyi/results/css/vendor-imports/mozilla/mozilla-central-reftests/contain

Is this feature restricted to secure contexts?
 No, not currently. Note that Chrome doesn't restrict it, so it could
conceivably create interop issues if we restricted it without getting them
to also commit to restricting it.

On Mon, Mar 18, 2019 at 12:39 PM Daniel Holbert 
wrote:

> Yeah, sorry - our earlier intent-to-implement thread predated our current
> boilerplate (which includes stuff like test coverage).  And for
> intent-to-ship, our boilerplate text is pretty minimal.
>
> Answering your direct question: yes, there is good web platform test
> coverage for this feature.  I'll post a followup with answers to our other
> typical intent-to-implement fields, too.
>
> On Mon, Mar 18, 2019 at 12:07 PM James Graham 
> wrote:
>
>> On 18/03/2019 19:01, Daniel Holbert wrote:
>> > As of today (March 18th 2019), I intend to turn CSS Containment
>> > <https://drafts.csswg.org/css-contain/> on by default on all
>> platforms, in
>> > Firefox Nightly 68. It has been developed behind the
>> > 'layout.css.contain.enabled' preference.
>>
>> Apologies if I've missed it, but I can't see any mention of whether this
>> feature has — meaningful — cross browser (i.e. wpt) tests in the ItI
>> thread or here.
>


Re: Intent to ship: CSS Containment

2019-03-18 Thread Daniel Holbert
Yeah, sorry - our earlier intent-to-implement thread predated our current
boilerplate (which includes stuff like test coverage).  And for
intent-to-ship, our boilerplate text is pretty minimal.

Answering your direct question: yes, there is good web platform test
coverage for this feature.  I'll post a followup with answers to our other
typical intent-to-implement fields, too.

On Mon, Mar 18, 2019 at 12:07 PM James Graham 
wrote:

> On 18/03/2019 19:01, Daniel Holbert wrote:
> > As of today (March 18th 2019), I intend to turn CSS Containment
> > <https://drafts.csswg.org/css-contain/> on by default on all platforms,
> in
> > Firefox Nightly 68. It has been developed behind the
> > 'layout.css.contain.enabled' preference.
>
> Apologies if I've missed it, but I can't see any mention of whether this
> feature has — meaningful — cross browser (i.e. wpt) tests in the ItI
> thread or here.


Intent to ship: CSS Containment

2019-03-18 Thread Daniel Holbert
As of today (March 18th 2019), I intend to turn CSS Containment
<https://drafts.csswg.org/css-contain/> on by default on all platforms, in
Firefox Nightly 68. It has been developed behind the
'layout.css.contain.enabled' preference.

Other UAs shipping this or intending to ship it are:
* Chrome & other Blink-based UAs (Chrome has supported since Chrome 52,
released in 2016, according to caniuse
<https://caniuse.com/#search=contain>).

Bug to turn on by default:
* https://bugzilla.mozilla.org/show_bug.cgi?id=1532471 is where I'll be
turning it on in Nightly by default (with a guard to only let it ship as
far as early-beta)
* https://bugzilla.mozilla.org/show_bug.cgi?id=1487493 is where I'll be
removing the guard & letting it ride the trains to release, assuming
everything goes well. (I anticipate that this will happen for the Firefox
68 release cycle.)

This feature was previously discussed in this "intent to implement" thread:
https://groups.google.com/d/msg/mozilla.dev.platform/q-uXVVClcU4/WswhXtlWqFIJ
(though the spec and the feature have evolved a good deal since then)

Note: we don't currently intend to ship support for one of the keywords
mentioned in the spec & which Chrome sort-of supports -- "contain:style".
Chrome doesn't implement this keyword properly/robustly, and we have doubts
about whether the complexity & maintenance burden of a correct
implementation would be justified by this keyword's limited use cases. See
https://github.com/w3c/csswg-drafts/issues/3280 for more on this. The CSS
Working group has marked this keyword as "at-risk", as a result of the
discussion around these concerns.

Many thanks to former-interns Morgan Reschenberg, Yusuf Sermet, and Kyle
Zentner for their hard work on this feature!


Re: Intent to unship: -moz-prefixed CSS gradient functions

2018-12-18 Thread Daniel Holbert
The un-shipping didn't stick this time, either.  After just a few days of
Nightly testing, we had three reports of significant breakage that were
caused by this un-shipping (on zimbra[1], blogger[2], and a demo page for a
webapp framework[3], which may have legacy instances deployed).  So, I
backed out (re-enabling -moz prefixed gradient functions).

Given that we've attempted this un-shipping & it's bounced several times
over the years, I tend to think we can't un-ship this moz-prefixed syntax
after all.  The web unfortunately seems to depend on being able to UA sniff
& send -moz prefixed gradient CSS to Firefox-flavored browsers (with no
fallback CSS).
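
For context, the fragile pattern we keep running into looks roughly like
this (selector and colors are made up), with no unprefixed fallback:

  .header { background: -moz-linear-gradient(top, #fff, #ccc); }

What we'd want sites to do instead is list the standard syntax last, so it
wins wherever it's supported:

  .header {
    background: -moz-linear-gradient(top, #fff, #ccc);  /* legacy fallback */
    background: linear-gradient(to bottom, #fff, #ccc); /* standard syntax */
  }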

~Daniel

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1183994
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1512577
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1512224


On Thu, Nov 29, 2018 at 1:45 PM  wrote:

> On Thursday, April 13, 2017 at 6:22:48 PM UTC-7, Xidorn Quan wrote:
> > In bug 1337655 [1], I'm going to disable -moz-prefixed CSS gradient
> > functions by default.
>
> This didn't stick (back in 2017), because it broke some buttons on gmail
> (which was filed as https://bugzilla.mozilla.org/show_bug.cgi?id=1366526
> ).  But that was later fixed on Google's end, and we didn't get any other
> reports of breakage, so I've just re-landed this.
>
> So, consider this a resurrected "intent to un-ship" -moz-prefixed CSS
> gradient functions. :)
>
> For now, I've only disabled them for EARLY_BETA_OR_EARLIER (i.e. Firefox
> 65 nightly and first half of Firefox 65 beta period), to get some testing
> without affecting release builds.  But if we don't have any serious
> webcompat fallout, we can relax that restriction and disable them in
> release as well.
>
> The full-disabling is tracked in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1176496.
>
> I'll quote the rest of xidorn's original intent-to-unship, for extra
> context/background:
> > We would still have -webkit-prefixed version of those functions which is
> > part of the Compat spec [2]. The assumption is that there wouldn't be
> > too many pages which depend on -moz-prefixed ones without also having
> > the -webkit-prefixed counterpart.
> >
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1337655
> > [2] https://compat.spec.whatwg.org/#css-image-type
> >
> >
> > - Xidorn


Re: Update on WebKit prefix support in Gecko.

2018-07-06 Thread Daniel Holbert
On Wednesday, February 17, 2016 at 9:11:43 PM UTC-8, Mike Taylor wrote:
> Howdy dev-platform (cross-posting fx-team for maximum synergy),
> 
> A quick update on our progress of implementing the WebKit related deps 
> of Bug 1170774.
> 
> In Bug 1213126 we set layout.css.prefixes.webkit to true by default to 
> let it ride the trains and see if anything exploded. Not surprisingly, 
> some stuff blew up.
[...]
> The following things have been disabled:
[...]
> - Bug 1237720: put "-webkit-min-device-pixel-ratio" behind its own pref 
> and disable it.
> 
> Supporting this fixed a number of mobile sites, but unfortunately broke 
> Google Docs on HiDPI devices. :( So it's disabled for now. Maybe it will 
> come back one day.

Update: I'm intending to land a pref-flip to turn this feature 
(-webkit-min-device-pixel-ratio) back on in 
https://bugzilla.mozilla.org/show_bug.cgi?id=1444139

The reason we backed it out before was that Google Docs used it in 
combination with a second feature (the "content" property on 
non-pseudoelements) which we did not support -- and their site broke if we 
enabled support for -webkit-min-device-pixel-ratio without also supporting this 
other feature.

But now we do support that other feature, as of 
https://bugzilla.mozilla.org/show_bug.cgi?id=215083 -- hooray! So we should be 
able to reenable -webkit-min-device-pixel-ratio as well and get a webcompat win 
on mobile. (see webcompat issues tagged off of 
https://bugzilla.mozilla.org/show_bug.cgi?id=1176968 )
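
For reference, the sort of rule this affects looks something like this
(the selector and asset URL are hypothetical):

  @media (-webkit-min-device-pixel-ratio: 2),
         (min-resolution: 2dppx) {           /* standard equivalent */
    .logo { background-image: url("logo@2x.png"); }
  }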


Re: Intent to unprefix grid-gap, grid-row-gap, and grid-column-gap and updating them to spec

2018-06-28 Thread Daniel Holbert
On Thu, Mar 29, 2018 at 8:19 AM, Mats Palmgren  wrote:

> Hi,
>
> In bug 1398482 I'm unprefixing the grid-gap, grid-row-gap, and
> grid-column-gap properties
>
[...]

>
> These properties also applies to Flexbox:
> https://drafts.csswg.org/css-align-3/#gap-flex
> We haven't implemented layout for that yet
>

Followup: Mihir Iyer has now landed a patch to make these properties work
in flexbox layout, over in
https://bugzilla.mozilla.org/show_bug.cgi?id=1398483

This functionality is in today's Nightly build. (See testcase on that bug
for an example.)
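
A quick sketch of what this enables (class name and sizes are made up):

  .toolbar {
    display: flex;
    flex-wrap: wrap;
    gap: 8px 16px;   /* row-gap, then column-gap -- now honored in flexbox */
  }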


Re: Studying Lossy Image Compression Efficiency

2018-02-26 Thread Daniel Holbert
Ah, I missed that Mike had replied -- it sounds like archive.org's
Wayback Machine is the easier way to get at the study, as compared to
bothering Josh. :)

On 2/26/18 9:26 AM, Daniel Holbert wrote:
> The people.mozilla.org site was a general-purpose webserver for Mozilla
> folks, and it was decommissioned entirely over the past few years.  So,
> that's why the study link there is broken.
> 
> You'd have to ask Josh (CC'd) if he has reposted (or could repost) the
> study docs somewhere else.
> 
> ~Daniel
> 
> On 2/24/18 9:51 AM, audioscaven...@gmail.com wrote:
>> On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
>>> Blog post is here:
>>>
>>> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>>>
>>> Study is here:
>>>
>>> http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/
>>
>> Hi,
>> The link to the study is broken.
>> Is HEVC/BPG another abandoned project?


Re: Studying Lossy Image Compression Efficiency

2018-02-26 Thread Daniel Holbert
The people.mozilla.org site was a general-purpose webserver for Mozilla
folks, and it was decommissioned entirely over the past few years.  So,
that's why the study link there is broken.

You'd have to ask Josh (CC'd) if he has reposted (or could repost) the
study docs somewhere else.

~Daniel

On 2/24/18 9:51 AM, audioscaven...@gmail.com wrote:
> On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
>> Blog post is here:
>>
>> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>>
>> Study is here:
>>
>> http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/
> 
> Hi,
> The link to the study is broken.
> Is HEVC/BPG another abandoned project?


Re: Inheriting annotations into included reftest.list files

2018-01-10 Thread Daniel Holbert
Agreed that this is footgunny & unexpected!

I'd lean slightly towards allowing the syntax and making it actually skip
the include expression.  This construct seems valuable to have in our
toolbox (to be used only sparingly, e.g. for cases of platform-specific
features).

(Based on a quick grep[1], it looks like we only do this 11 times in
layout/reftests right now, which is about what I'd hope/expect.)

[1] grep -r "skip.*include" layout/reftests | wc -l

On Wed, Jan 10, 2018 at 11:30 AM, Kartikaya Gupta 
wrote:

> Another option would be to keep allowing this syntax of "skip-if(x)
> include some/reftest.list" but actually make it skip the entire
> include if the condition "x" is true.
>
> On Wed, Jan 10, 2018 at 10:49 AM, Kartikaya Gupta 
> wrote:
> > This will probably come as a surprise to many (as it does to me each
> > time I rediscover it), but if, in a reftest.list file, you do
> > something like this (real example from [1]):
> >
> > skip-if(browserIsRemote) include ogg-video/reftest.list
> >
> > this may not do what you expect. My expectation, at least, is that the
> > entire ogg-video/reftest.list file is skipped if the "browserIsRemote"
> > condition is true.
> >
> > However, what *actually* happens is that the skip-if expected status
> > (which is EXPECTED_DEATH [2]) gets "inherited" down into the included
> > reftest.list, and if there's a higher-valued [3] annotation on a
> > reftest inside that included list, then that will "win". So in
> > practice, this means that any reftest inside ogg-video/reftest.list
> > that has a fuzzy() annotation, or a fuzzy-if(x) annotation where x is
> > true, will still run.
> >
> > This seems like a very unexpected result, and looking through some of
> > the cases where this happens it's not at all clear to me if this was
> > intentional, or if these tests are just running accidentally because
> > nobody realized this would happen.
> >
> > I'm happy to make changes to the reftest manifest parser to remove
> > this footgun (most likely by disallowing annotations on include
> > statements) but we would need to go through each instance of this in
> > the reftest.list files and fix things up so that the tests that are
> > running are in line with the expectations of whoever added/owns the
> > tests.
> >
> > I wanted to open this up for discussion in case people have any
> > thoughts on it before I move forward and try to clean this up.
> >
> > [1] https://searchfox.org/mozilla-central/rev/03877052c151a8f062eea177f684a2743cd7b1d5/layout/reftests/reftest.list#275
> > [2] https://searchfox.org/mozilla-central/rev/03877052c151a8f062eea177f684a2743cd7b1d5/layout/tools/reftest/manifest.jsm#228
> > [3] https://searchfox.org/mozilla-central/rev/03877052c151a8f062eea177f684a2743cd7b1d5/layout/tools/reftest/globals.jsm#42


Re: A reminder about commit messages: they should be useful

2017-04-18 Thread Daniel Holbert
The very basics (at least) are semi-documented here:

https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities#Checkin_comment

I frequently point people at that ^^ when they get it wrong (and e.g.
use the bug title as the commit message).

~Daniel


On 4/18/17 12:00 AM, Anne van Kesteren wrote:
> On Mon, Apr 17, 2017 at 5:16 PM, Boris Zbarsky  wrote:
>> A quick reminder to patch authors and reviewers.
> 
> Is this documented somewhere so you can just point folks to
> documentation if they get it wrong? E.g., for WHATWG standards there
> is a README.md that (sometimes indirectly) points to
> https://github.com/erlang/otp/wiki/Writing-good-commit-messages for
> general commit message guidelines on top of which WHATWG has some of
> their own conventions.
> 
> 


Re: Watch out for a rogue nsILayoutHistoryState.h

2017-03-28 Thread Daniel Holbert
It seems that generally, we don't need a clobber when adding new IDL
files -- but this was a special case where we do (and I've just pushed a
CLOBBER-tweak followup to inbound).

See https://bugzilla.mozilla.org/show_bug.cgi?id=1351461#c4 for a theory
about why.

~Daniel


On 3/28/17 1:43 PM, Andrew McCreight wrote:
> I updated my tree this morning and did a build, and I ended up with a new
> untracked
> nsILayoutHistoryState.h file. Be sure to not accidentally commit it.
> 
> I deleted the file, and then had to do a clobber build to fix everything.
> 
> I filed bug 1351461 for the clobber issue or whatever it is.


Re: Intent to Implement: adding vector effects non-scaling-size, non-rotation and fixed-position to SVG

2017-01-26 Thread Daniel Holbert
On 1/3/17 4:48 PM, ktecrami...@gmail.com wrote:
>> e.g. It seems like introductory notes example could just use a
>> separate SVG element that had fixed positioning instead of needing to build
>> fixed-position into SVG.
> 
> By "introductory notes example" do you mean the example in following link?
> https://svgwg.org/svg2-draft/coords.html#VectorEffects 

I think that's what Jeff meant, yes.

(Basically, can authors already solve the problems that
"vector-effect:fixed-position" is intended to solve, by simply creating
two different <svg> elements (in an HTML context), and styling one with
"position: fixed"?  Here's a simple demo with a floating fixed-position
legend, using that sort of solution:
https://jsfiddle.net/vv9gwemr/
)
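
A rough sketch of that workaround's CSS (the class name is hypothetical) --
just an ordinary fixed-position SVG overlay, instead of relying on
"vector-effect: fixed-position":

  svg.legend {
    position: fixed;
    top: 1em;
    right: 1em;
  }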

Also, Jeff asked another important question -- have other browsers
expressed any intent to implement these features?

Also note that this feature is marked as "at-risk" of being dropped from
this version of the SVG2 spec:
"ISSUE 31: Values of vector-effect other than non-scaling-stroke and
none are at risk of being dropped from SVG 2 due to a lack of
implementations."
https://svgwg.org/svg2-draft/coords.html#issue31

Looks like that warning was added here:
https://github.com/w3c/svgwg/issues/186

If no other browsers are intending to implement it, then there's a
serious question of whether it's worth taking on this added code & the
increased complexity/attack-surface/bug-surface/maintenance-burden that
new features inevitably bring along, if web developers aren't going to
be able to use the feature on the web due to lack of support in other
browsers.

~Daniel


Re: Intent to implement and ship: CSS display:flow-root

2016-12-21 Thread Daniel Holbert
On 12/21/2016 10:57 AM, mtana...@yandex.ru wrote:
> Fwiw, there is also a feature request for Edge:
> 
> https://wpdev.uservoice.com/forums/257854/suggestions/17420707
[...]
> https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/10152756/
> 
> For some reason cannot add any of those two URLs to the “See Also” field in 
> Bugzilla: it rejects with the message “Invalid Bug URL”

This linking issue is tracked in
https://bugzilla.mozilla.org/show_bug.cgi?id=1322371 (at least for
developer.microsoft.com, and I added a note there about the uservoice
URLs as well).

~Daniel


Re: Linux content sandbox tightened

2016-10-07 Thread Daniel Holbert
On 10/07/2016 12:49 AM, Gian-Carlo Pascutto wrote:
> This behavior can be controlled via a pref:
> pref("security.sandbox.content.level", 2);
> 
> Reverting this to 1 goes back to the previous behavior

Warning: don't actually try to revert this to 1, just yet -- at the
moment, that triggers startup crashes as described in
https://bugzilla.mozilla.org/show_bug.cgi?id=1308568

~Daniel


Re: NS_WARN_IF_FALSE has been renamed NS_WARNING_ASSERTION

2016-09-21 Thread Daniel Holbert
On 09/21/2016 08:41 PM, Samael Wang wrote:
> The NS_ASSERTION document [1] says "Don't use NS_ASSERTION", which could be a 
> bit confusing that now some of the similar named macros may be deprecated but 
> some are new and encouraged.

I think that document's advice is too severe.

roc made a compelling case for *preferring* non-fatal assertions (i.e.
NS_ASSERTION) in many scenarios, particularly when failure is not
catastrophic.  See his blog post here for more details:
 http://robert.ocallahan.org/2011/12/case-for-non-fatal-assertions.html

I found (and still find) his argument there quite compelling.  I think
it's important that debug builds are able to proceed past (inevitable)
unexpected-but-not-horrible conditions -- while at the same time,
perhaps shouting about those unexpected conditions noisily so that
people can report them if they care to do so (and so that test suites fail).

Personally, I use *both* MOZ_ASSERT and NS_ASSERTION, in different
scenarios, depending on how catastrophic it would be if the asserted
condition doesn't hold, and depending on how sure I am that the
condition does-currently & must-always-in-the-future hold true.

> Could we possibly have a section in the coding guideline explaining all these 
> NS_ASSERTION / MOZ_ASSERT / NS_ENSURE_* / NS_WARN_IF and other related 
> macros, and point (or add a link on) these macros' MDN reference pages to the 
> guideline so new contributors can have an overview of when to / not-to use 
> these macros?

That is a great idea!  I would support the creation of such a page. :)

~Daniel


Re: NS_WARN_IF_FALSE has been renamed NS_WARNING_ASSERTION

2016-09-21 Thread Daniel Holbert
On 09/21/2016 12:48 PM, ISHIKAWA,chiaki wrote:
> In the following URL about coding style,
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> --- begin quote ---
> if (NS_WARN_IF(somethingthatshouldbetrue)) {
>   return NS_ERROR_INVALID_ARG;
> }
> 
> if (NS_WARN_IF(NS_FAILED(rv))) {
>   return rv;
> }
> 
> --- end quote 
> 
> I am not a native speaker of English, but shouldn't the variable name in
> the first if-statement example be
> |somethingthatshouldNOTbetrue| instead of |somethingthatshouldbetrue|?

You're very right!  I've fixed it to say "somethingthatshouldbefalse"
(s/true/false/):
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style$compare?locale=en-US&to=1122081&from=1086401

> If so, Japanese, Chinese and Russian translation ought to be changed as
> well.

I am not qualified to fix those. I'm hoping you & others can take care
of that, perhaps! :)

> I may have found a few code fragments that may have been miswritten due
> to misunderstanding.

Yikes! Please file bugs on those. Hopefully they are just cases of us
accidentally warning in the wrong condition, rather than us actually
behaving incorrectly.


Re: Getting rid of already_AddRefed?

2016-09-12 Thread Daniel Holbert
On 09/12/2016 12:22 PM, Boris Zbarsky wrote:
> It's worse than that.  For a wide range of callers (anyone handing the
> ref across threads), the caller must check immediately whether the
> callee actually took the value and if not make sure things are released
> on the proper thread...

Ah, ok; I can see that complicating the logic in the caller
considerably, if the caller really had to worry about that.

Seems like we should enforce that callees must do the cleanup in that
sort of case (and already_AddRefed does indeed enforce that, which is
nice).

>> I guess my bottom-line question is: which pattern should I prefer, for
>> new code that wants to trivially transfer ownership of a refcounted
>> object into a function or a constructor?
> 
> Are threads involved?  ;)

Nope!

> My gut feeling is that we should use the safer pattern with
> already_AddRefed just so people don't start using the other pattern in
> anything resembling multithreaded code, though in your particular case
> both patterns would be fine...

OK, that's reasonable -- thanks.

~Daniel

P.S. As I realized when replying to khuey -- part of my motivation here
was for consistency with UniquePtr, where the
documentation/best-practices explicitly call for using Move() and
UniquePtr&& to transfer ownership.  It seems like it'd be nice if our
other smart pointer types could have consistent best-practices for
similar operations.  But, maybe that's asking too much, in the presence
of these thread complications. :)


Re: Getting rid of already_AddRefed?

2016-09-12 Thread Daniel Holbert
On 09/12/2016 11:00 AM, Boris Zbarsky wrote:
> On 9/12/16 1:53 PM, Daniel Holbert wrote:
>> (I believe we have the "already_AddRefed as a parameter" pattern in our
>> code in order to avoid an extra addref/release when ownership is being
>> transferred into a function.
> 
> And assertions if the function does not actually take the reference...
>> But, RefPtr/nsCOMPtr with move references
>> would let us do that just as well
> 
> Not the assertions part, right?

Right -- this sort of code with Move() and a RefPtr<>&& param would be
missing those assertions.

I don't see that as a *huge* downfall, though.  Those assertions have a
few benefits for already_AddRefed:
 (1) They guard against leaks from unused already_AddRefed.
 (2) They guard against logic errors where we should've taken the value
but forgot to do so.  (And it enforces that callees must take the value,
even if they have some error handling cases where they'd prefer not to.)

So I think (1) is irrelevant here, because with Move() the object will
always be stored in a refcounted pointer of some sort.  I think (2) is
semi-relevant.  It brings up one potential concern with Move: since the
callee might not take the value (intentionally or unintentionally), the
caller must *refrain* from caring about the state of its Move()'d
variable, between the Move() operation and any reassignment/cleanup.
(This seems like something that static analysis could check for us &
could guarantee is a non-issue. And it seems like a standard part of the
Move bargain that reviewers should know to watch for, as soon as they're
familiar with Move.)

I guess my bottom-line question is: which pattern should I prefer, for
new code that wants to trivially transfer ownership of a refcounted
object into a function or a constructor?

(In my current case, I'm looking at a constructor much like the
nsDOMMutationObserver example that I linked to, with a parameter that
gets directly assigned in an init list -- so, there's no actual risk of
logic preventing it from getting taken.)

I'm OK with continuing to prefer already_AddRefed, if I should; though I
was initially hoping that this is one spot where we could get rid of it
(with the upsides being brevity, consistency in type-of-smart-pointer,
and fewer mozilla-specific quirks).

Thanks,
~Daniel


Re: Getting rid of already_AddRefed?

2016-09-12 Thread Daniel Holbert
On 08/12/2014 07:59 AM, Benjamin Smedberg wrote:
> But now that nsCOMPtr/nsRefPtr support proper move constructors, [...]
> Could we replace every already_AddRefed return value with a nsCOMPtr?

I have a variant of bsmedberg's original question here. He asked about
return values, but I'm wondering:
Could we replace every already_AddRefed *function-parameter* with a move
reference to a RefPtr/nsCOMPtr?

(Sorry if this was answered in this thread; I skimmed through it &
didn't find a definitive answer immediately. It seems like this might be
a strictly easier thing to fix up, as compared to return values.)

As a concrete example, at this usage site...
https://dxr.mozilla.org/mozilla-central/rev/1851b78b5a9673ee422f189b92e5f1e86b82a01c/dom/base/nsDOMMutationObserver.h#468
...is there any benefit to us using
an already_AddRefed&& parameter (aOwner), instead of an
nsCOMPtr&& one?

(I believe we have the "already_AddRefed as a parameter" pattern in our
code in order to avoid an extra addref/release when ownership is being
transferred into a function.  But, RefPtr/nsCOMPtr with move references
would let us do that just as well, more concisely & less arcanely.)

Thanks,
~Daniel


PSA: nsCSSProperty is being renamed to nsCSSPropertyID -- local patches may need updates

2016-08-15 Thread Daniel Holbert
Just a heads-up, for anyone working on layout / style-system-related code:

In bug 1293739 [1], Jonathan Chan has some patches that will rename the
enumerated type "nsCSSProperty" to now be named "nsCSSPropertyID"
(adding the "ID" suffix). I plan on landing his patches on inbound later
today.

It's likely that some people have local layout/style-system-related
patches that will no longer apply cleanly (or no longer compile) as a
result of this change.  If you find yourself in this category, you
should be able to fix this by e.g. using a text editor to do
search-and-replace to change s/nsCSSProperty/nsCSSPropertyID/ throughout
your local patches.

(We're doing this rename in the service of implementing CSS Properties &
Values Houdini API[2].  In that bug, we'll be re-introducing
"nsCSSProperty" as a broader type that includes custom author-defined
CSS properties *as well as* our normal enumerated properties.  And this
renamed nsCSSPropertyID enum-type is restricted to only include the
(enumerated) CSS properties that we have built-in support for.)

Thanks,
~Daniel

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1293739
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1273706


Re: flexbox issue in firefox

2016-08-08 Thread Daniel Holbert
On 08/08/2016 10:12 AM, Daniel Holbert wrote:
> The fix in Firefox should be a pretty simple change, I think, but
> unfortunately we haven't gotten to it yet.  I can prioritize it to fix
> in the next week or so, though that still means it wouldn't reach
> Firefox release users for another few months (in Firefox 51, due out in
> late January it looks like).  Not sure what that means for your
> particular use-case

Oh, I just noticed Emilio already posted patches for our bug - great!

(I'm just returning from vacation (back in the office tomorrow), so I'm
watching bugmail super-closely at the moment. :))


Re: flexbox issue in firefox

2016-08-08 Thread Daniel Holbert
I believe you'd need to use a definite "height" value on the flex
container or one of its descendants.  (instead of -- or along with --
the explicit "max-height" that you're using right now)  For example:
 https://jsfiddle.net/04o1kwfd/4/

(I suspect you may not want an explicit "height", but I don't know any
way to fix the testcase without that, unfortunately.)

In your provided testcase, the children's heights can't be resolved
against the container's height (and can't be fit to the container-height
using "align-items:stretch"), because they're really sized within their
(invisible) *flex line*, which in this case is sized to the children --
not to the container.

If you provide an explicit height on the flex container, then the spec
mandates that this explicit height becomes the flex line's height as
well -- and then the children will stretch to that size.  So, that's why
an explicit height helps. And the spec-change that we need to implement
here would similarly constrain the height of the flex line using the
max-height, as described here:
 https://hg.csswg.org/drafts/rev/9b2355f9ccf3#l1.10
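
A minimal sketch of the workaround, assuming made-up class names -- the
definite height on the flex container becomes the flex line's height, so
the items can stretch to it and scroll internally:

  .container {
    display: flex;
    height: 300px;     /* definite height (possibly alongside max-height) */
  }
  .panel {
    overflow: auto;    /* gets its own scrollbar instead of overflowing */
  }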

The fix in Firefox should be a pretty simple change, I think, but
unfortunately we haven't gotten to it yet.  I can prioritize it to fix
in the next week or so, though that still means it wouldn't reach
Firefox release users for another few months (in Firefox 51, due out in
late January it looks like).  Not sure what that means for your
particular use-case.

~Daniel


On 08/07/2016 09:35 PM, Amit Zur wrote:
> Thank you Daniel,
> 
> Do you know if there's a way around it to achieve the goal of having the 
> content shown with auto height (and max-height) and scrollable inner 
> containers?
> 
> Also, is it planned to ship in a near nightly?
> 
> On Sunday, August 7, 2016 at 10:23:01 PM UTC+3, Daniel Holbert wrote:
>> Firefox's behavior on that testcase matches an older version of the spec
>> (and then the spec changed).
>>
>> This bug...
>>   https://bugzilla.mozilla.org/show_bug.cgi?id=1000957
>> ...is filed on bringing us up-to-date on that point.
>>
>> ~Daniel
>>
>>
>> On 08/07/2016 05:16 AM, Amit Zur wrote:
>>> Hey,
>>>
>>> Take a look at this fiddle: https://jsfiddle.net/04o1kwfd/1/
>>> The 2 colored panels should be taking the size of their container, and the 
>>> left one should have a scrollbar.
>>> However in firefox they overflow the container. I'm aware of the min-size 
>>> issue with flex items, but no min-height that I tried would work here.
>>>
>>> Any ideas?
>>>
>>> Amit
> 


Re: flexbox issue in firefox

2016-08-07 Thread Daniel Holbert
Firefox's behavior on that testcase matches an older version of the spec
(and then the spec changed).

This bug...
  https://bugzilla.mozilla.org/show_bug.cgi?id=1000957
...is filed on bringing us up-to-date on that point.

~Daniel


On 08/07/2016 05:16 AM, Amit Zur wrote:
> Hey,
> 
> Take a look at this fiddle: https://jsfiddle.net/04o1kwfd/1/
> The 2 colored panels should be taking the size of their container, and the 
> left one should have a scrollbar.
> However in firefox they overflow the container. I'm aware of the min-size 
> issue with flex items, but no min-height that I tried would work here.
> 
> Any ideas?
> 
> Amit


Re: Intent to implement: CSS Houdini - Properties & Values API Level 1

2016-07-25 Thread Daniel Holbert
On 07/25/2016 07:11 AM, Ms2ger wrote:
> Hey Jonathan,
[...]
> Do we know how other vendors feel about this?

Sentiment seems to be positive.

Browser vendors are collaborating on developing the Houdini specs, and I
haven't heard any serious reservations on this spec. (This is among the
more simple/stable of the Houdini family of specs.)

I believe we're not the only ones working on an implementation, too --
Google has a work-in-progress implementation of the Houdini "CSS Paint"
API (with a brief demo video here [1]), and that API layers on top of
this feature ("css properties & values"), which I think means they're
also working on implementing this feature.

> Are there automated tests that will be shared with other vendors (and
> Servo)?

There are some reftests on the bug [2] (final patch).

~Daniel

[1] https://www.youtube.com/watch?v=AfiaReDetZE
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1273706


Re: Intent to implement & ship: support for a subset of -webkit prefixed CSS properties & features

2016-05-13 Thread Daniel Holbert
On 05/13/2016 10:49 AM, Jet Villegas wrote:
> If I'm reading the dependency list correctly, we still plan to uplift to
> 48 if we can get bug 1264905 fixed in time. Is that correct?

bug 1264905's fix (a pref-unguarding) was just landed, as well.

We could uplift both, if we *also* uplift bug 1269971 (which was just
fixed yesterday, and which bug 1264905 depends on).

I'm instinctively uneasy about that, since that bug (bug 1269971) is
basically a reimplementation of the feature in question.
(background-clip:text)

~Daniel


Re: Intent to implement & ship: support for a subset of -webkit prefixed CSS properties & features

2016-05-13 Thread Daniel Holbert
On 12/30/2015 10:40 PM, Daniel Holbert wrote:
> Estimated or target release:
>   Firefox 46 (current Nightly), or 47 if we need to hold it back a
> release to fix things.
> 
> Preference behind which this will be implemented:
>   layout.css.prefixes.webkit

Following up on this -- this feature will be default-enabled in Firefox
49, as of the pref-unguarding-patch that I just landed on this bug:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1259345

(This feature has been enabled & getting very useful testing & having
bugs filed in Nightly/Aurora ever since Firefox 46. At this point, we've
fixed all known regressions that are triggered by this feature, so we're
calling it safe to ship in Firefox 49, and we'll be watching for any
more bugs that are reported.)

Thanks,
~Daniel


Re: Intent to Implement/Ship: -webkit-text-stroke

2016-04-14 Thread Daniel Holbert
On 04/14/2016 02:40 AM, Ms2ger wrote:
>> Preference behind which this will be implemented:
>> layout.css.prefixes.webkit
> 
> Should this have a more specific pref?

Absent a compelling reason, no -- it should not.

We're using layout.css.prefixes.webkit here because, without this
-webkit-text-stroke feature, our webkit-prefix support breaks sites.
So, we can't really ship -webkit prefix support without also shipping
support for this feature.  And the fewer prefs we have to keep track of
(and to have to worry about enabling/disabling if & when we discover
trouble at the last minute), the better.

Specifically, the dependency is as follows:
 (1) The web depends on "-webkit-linear-gradient" as a background.

 (2) *But*, it turns out that one common use-case for
-webkit-linear-gradient is to create a background *which is only
intended to be viewed through transparent text*.  Some sites (bloomberg
news at least) use "-webkit-text-stroke" as part of this effect.

 (3) If we enable layout.css.prefixes.webkit without enabling this one
feature (-webkit-text-stroke), Bloomberg's pull-quotes are unreadable.
Screenshot of what that would look like (taken in Chrome, with the
-webkit-text-stroke declaration manually disabled via devtools):
  https://bug1248644.bmoattachments.org/attachment.cgi?id=8719912

(It's a bit more complicated than this; see discussion on
https://bugzilla.mozilla.org/show_bug.cgi?id=1248644 for more details.)
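
Roughly, the pattern in question looks like this (selector and colors are
made up; this is just a sketch of the technique, not any particular site's
actual CSS):

  .pull-quote {
    background: -webkit-linear-gradient(left, #444, #999);
    -webkit-background-clip: text;         /* show it only through glyphs */
    -webkit-text-fill-color: transparent;
    -webkit-text-stroke: 1px rgba(0, 0, 0, 0.6);
  }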

SO: there isn't any graceful fallback if we selectively shipped other
webkit prefixing support without *also* shipping support for this
feature, as shown by the screenshot above. So, it makes sense to combine
it under the umbrella of the same pref.

~Daniel


Re: Dump frame tree in real time

2016-04-08 Thread Daniel Holbert
On 04/08/2016 02:55 PM, Daniel Holbert wrote:
> On 04/08/2016 10:38 AM, Jip de Beer wrote:
>> I didn't manage to dump the Frame Tree using lldb... I followed these guides:
> [...]
>> I tried with Firefox, FirefoxDeveloperEdition and the nightly build (ran 
>> lldb from Terminal as well as Xcode).
>> I was able to attach lldb to the browser, but not output a Frame Tree dump.
>> The code for the nightly build was unmodified. When I tried to define 
>> DEBUG_FRAME_DUMP in layout/generic/nsFrameList.h (by uncommenting a block of 
>> code) the build failed.
> 
> Please report that build failure as a bug with more details!  We likely
> are accidentally intermixing assumptions about #ifdef DEBUG and #ifdef
> DEBUG_FRAME_DUMP or something.

(BTW, I tested this locally, and I didn't hit any build failures. So I
may be doing something different from you, and I'm not as sure there's a
real build-error bug hiding here after all -- it's possible you were
tripping over an unrelated build error of some sort.  But, if you can
still reproduce this build error, please do report it (including details
on the change that you made and the build error output), here:
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Layout
)

Anyway, as noted in my previous email, even if this nsFrameList.h tweak
works, the better-supported interactive way to get frametree dumps is to
generate a build with ac_add_options --enable-debug in your mozconfig,
and to use the layout debugger's "Dump" menu.  We conceivably *could*
add instrumentation to normal builds, but this would require shipping
extra Gecko code that a vanishingly small subset of users would actually
benefit from, so it doesn't seem worth it.  Depending on the use case,
it might be better to just write & expose a more generic
element-bounding-box-dumping tool in devtools.

~Daniel


Re: Dump frame tree in real time

2016-04-08 Thread Daniel Holbert


On 04/08/2016 10:38 AM, Jip de Beer wrote:
> I didn't manage to dump the Frame Tree using lldb... I followed these guides:
[...]
> I tried with Firefox, FirefoxDeveloperEdition and the nightly build (ran lldb 
> from Terminal as well as Xcode).
> I was able to attach lldb to the browser, but not output a Frame Tree dump.
> The code for the nightly build was unmodified. When I tried to define 
> DEBUG_FRAME_DUMP in layout/generic/nsFrameList.h (by uncommenting a block of 
> code) the build failed.

Please report that build failure as a bug with more details!  We likely
are accidentally intermixing assumptions about #ifdef DEBUG and #ifdef
DEBUG_FRAME_DUMP or something.

Anyway -- as it seems you noticed, our frametree-dumping code is guarded
by that exact #define -- DEBUG_FRAME_DUMP -- which means the code is not
present in any of the builds that you tested. (Firefox, Nightly,
DevEdition.)  So, the code is only present in debug builds, or builds
with MOZ_DUMP_PAINTING enabled at compile-time.

Your best bet is probably to:
 (1) Build or download a debug version of Firefox. (If you build it
yourself, put ac_add_options --enable-debug in your .mozconfig. Or, you
can download a debug build from today at
http://archive.mozilla.org/pub/firefox/tinderbox-builds/mozilla-central-macosx64-debug/1460109722/
)

 (2) Start the layout debugger (Tools|Layout Debugger) -- this menu
option is only present in debug builds.

 (3) Load whatever site you want in the tiny layout-debugger browser
that appears (with minimal-UI)

 (4) Dump the frame tree (to your terminal) using Dump|Frames in the
layout debugger window.

~Daniel


Re: Intent to implement: -webkit-background-clip:text

2016-03-22 Thread Daniel Holbert
On 03/22/2016 04:53 PM, Jet Villegas wrote:
> I'm thinking we need two prefs so we cover the prefixed and unprefixed
> API as per:
> https://lists.w3.org/Archives/Public/www-style/2016Mar/0283.html
> 
> It's a bit odd to have the -webkit parser pref also control the
> rendering pref in this case.

Yeah... I guess I agree.

(I guess I can also imagine a future where we're ready to ship
"background-clip:text" because it's ready, though we still want to hold
off on shipping webkit prefixing support in general, e.g. because of
some as-yet-undiscovered new source of compat regressions.  In that
world, it would be useful to have this feature individually preffable
separately from webkit prefix support.)

So, I'll retract my pushback & agree with giving this its own pref after
all (something like "layout.css.background-clip-text.enabled" per my
earlier response).

Thanks,
~Daniel


Re: Intent to implement: -webkit-background-clip:text

2016-03-22 Thread Daniel Holbert
On 03/22/2016 02:16 PM, Mike Taylor wrote:
> +1. It has been nice to have a single pref to flip to test for potential
> regressions (and for instructing people to do the same thing).

That's probably not a big concern here -- even if we ship this with its
own dedicated feature-pref, we could still test for potential webcompat
regressions by toggling the main webkit pref on & off, since that would
control whether the "-webkit-background-clip" alias was enabled at all.
(And realistically, people only use this feature via that alias -- not
via the standard "background-clip" property.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: -webkit-background-clip:text

2016-03-22 Thread Daniel Holbert
On 03/21/2016 10:38 PM, Jet Villegas wrote:
> I'd like to see this guarded by its own pref && layout.css.prefixes.webkit

Pushing back on this slightly:
 - At this point, I don't think it's conceivable that we'd want to ship
our webkit compatibility work until we're ready to also ship support for
"-webkit-background-clip:text".  (If we're missing that feature, -webkit
gradient support ends up causing too many visual regressions to be
shippable, as noted at
https://lists.w3.org/Archives/Public/www-style/2016Mar/0290.html )

 - Therefore, I'm not sure we get any real-world benefit from guarding
this feature with an additional dedicated pref.  And there'd be a
complexity cost from making sure we test all combinations of pref
enabled/disabled states, & do the right thing when one or the other pref
is disabled.

So, I'm not sure the cost/benefit calculus of adding a new, dedicated
pref adds up here.

However, if we do add another pref here, I think we'd want to just have
it directly control this "text" value, independent of webkit prefixing
support. Specifically, I imagine we'd have these prefs:

 (1) layout.css.background-clip-text.enabled (new) to control whether
"background-clip: text" is supported

 (2) layout.css.prefixes.webkit (existing) to control whether
"-webkit-background-clip" is an alias for "background-clip".  (This part
is already done.)

(This way, "-webkit-background-clip:text" would *effectively* be
disabled unless both prefs are on -- but we wouldn't need to do any
multi-pref checks anywhere in the code.  There'd be no need for "own
pref && layout.css.prefixes.webkit" guarding.)

This configuration makes sense to me - though I'm still not sure it adds
value over just reusing the same pref, since as noted above we can't
really ship one without the other.
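
(For reference, the usage pattern that these prefs would gate looks
roughly like this -- just a sketch, with a made-up selector:

  h1 {
    background: linear-gradient(to right, purple, orange);
    -webkit-background-clip: text; /* alias; gated on layout.css.prefixes.webkit */
    background-clip: text;         /* standard form; gated on the dedicated pref */
    color: transparent;            /* let the clipped background show through the glyphs */
  }
)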

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: To bump mochitest's timeout from 45 seconds to 90 seconds

2016-02-12 Thread Daniel Holbert
On 02/12/2016 11:02 AM, Armen Zambrano G. wrote:
> On 16-02-09 01:32 PM, Daniel Holbert wrote:
>> Just to clarify, you're *only* talking about browser-chrome mochitests
>> here, correct?  (not other mochitest suites like mochitest-plain)
>>
>> (It looks like this is the case, based on the bug, but your dev.platform
>> post here made it sound like this change affected all mochitests.)
>>
>> Thanks,
>> ~Daniel
> I'm interested in browser-chrome. Is there a different variable for
> those tests?

I don't know anything about browser-chrome tests. But I do know that
mochitest-plain at least has a 5 minute (300 second) timeout, by
default. (That's what SimpleTest.requestLongerTimeout(...) operates on
for those tests.)

I believe that's defined here:
http://mxr.mozilla.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/TestRunner.js?rev=debc5269aeac#91
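
(And in case it's useful: a mochitest-plain test that legitimately needs
more time can ask for a multiple of that default timeout. A minimal
sketch -- the factor of 2 here is arbitrary:

  // Give this test 2x the default 300-second timeout:
  SimpleTest.requestLongerTimeout(2);
)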

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Please stop revving UUIDs when changing XPIDL interface

2016-02-07 Thread Daniel Holbert
We still have documentation on MDN with the old rules:

  "The uuid must be changed anytime any part of the
   interface or its ancestors are changed"
https://developer.mozilla.org/en-US/docs/Mozilla/XPIDL#Interfaces

Ehsan, could you update that section of the page to reflect the new
state of the world?

Thanks!
~Daniel

On 02/01/2016 07:19 AM, Ehsan Akhgari wrote:
> On Mon, Feb 1, 2016 at 12:42 AM, Gregory Szorc  wrote:
> 
>> On Fri, Jan 29, 2016 at 1:52 PM, Ehsan Akhgari 
>> wrote:
>>
>>> (Sending this in another thread in case people didn't see my note at the
>>> end of the original thread.)
>>>
>>> The new rules are in effect for mozilla-central and the repositories that
>>> merge into it.  Revving UUIDs is no longer necessary.
>>>
>>
>> Is `mach update-uuids` still necessary? If not, can we get a bug on file
>> to remove it?
>>
> 
> No, filed bug 1244736 to remove it.
> 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 12:19 AM, Daniel Holbert wrote:
> I wasn't able to
> remotely figure out what the piece of spyware was or how to remove it --
> but the rejected certs reported their issuer as being "Digital Marketing
> Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
> turn up anything useful, unfortunately; so I suspect this is "niche"
> spyware, or perhaps the name is dynamically generated.)

UPDATE: in my family friend's case, the shoddy MITM spyware in question
was "Simmons Connect Research Application", a consumer profiling tool
that's tied to Experian which users can voluntarily install in exchange
for points that you can use to buy stuff.

She was able to fix the problem by uninstalling that program (simmons
connect research application).

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
For reference, I've now filed a bug to cover outreach for the specific
tool that this user was using:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1236664

I'm also trying to get my hands on the software, but it's "invitation
only", so that may prove difficult.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 10:33 AM, Josh Matthews wrote:
> Wouldn't the SSL cert failures also prevent submitting the telemetry
> payload to Mozilla's servers?

Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
for that matter! (I'm assuming the update-check is performed over HTTPS.)

So there might be literally nothing we can do to improve the situation
for these users, from a changes-to-Firefox perspective.

Even if we wanted to take the extreme measure of issuing an update to
delay our new-SHA1-certs-not-trusted-after date (extremely
hypothetical), this wouldn't help users who are affected by this
problem, because they couldn't receive the update (I think).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 12:07 PM, Daniel Holbert wrote:
> UPDATE: in my family friend's case, the shoddy MITM spyware in question
> was "Simmons Connect Research Application", a consumer profiling tool
> that's tied to Experian which users can voluntarily install in exchange
> for points that you can use to buy stuff.

I reached out to Experian on Twitter:
 https://twitter.com/CodingExon/status/684105591288008704
...and also via a web form on one of their Simmons Connect pages.

I also sent the following to
http://www.digitalmarketresearchapps.com/contact.html , which seems to
be the HTTPS interception library that they're using:
==
Hi,
I'm a software engineer at Mozilla, working on the Firefox web browser,
and I'm contacting you about something extremely urgent -- I'm hoping to
reach an engineer who works on your HTTPS interception library/tool.

As of January 1st (several days ago), your tool *entirely breaks* HTTPS
connections in Firefox, due to your tool's reliance on a deprecated
security algorithm called SHA1. The importance of this is hard to
overstate -- for users who have your tool installed, their internet
access is *completely* broken, including their ability to download
browser updates.  Chrome users are (or will soon be) affected as well,
and Internet Explorer/Edge users will be affected at some point in the
next year -- all browsers are coordinating on phasing out SHA1
certificate support.

Specifically:
Based on a user report, it seems "Digital Market Research Apps" is
issuing certificates for a consumer profiling tool called "Simmons
Connect".  As of January 1st, this user was unable to visit any HTTPS
site in Firefox, because the tool was providing newly-generated
certificates using the obsolete SHA1 algorithm.  And per
https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
, such certificates are treated as untrusted.

Please contact me as soon as possible.  For users with your software
installed, it's of the utmost urgency that you issue an update, to make
your certificates use a newer algorithm than SHA1.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
Heads-up, from a user-complaint/ support / "keep an eye out for this"
perspective:
 * Starting January 1st 2016 (a few days ago), Firefox rejects
recently-issued SSL certs that use the (obsolete) SHA1 hash algorithm.[1]

 * For users who unknowingly have a local SSL proxy on their machine
from spyware/adware/antivirus (stuff like superfish), this may cause
*all* HTTPS pages to fail in Firefox, if their spyware uses SHA1 in its
autogenerated certificates.  (Every cert that gets sent to Firefox will
use SHA1 and will have an issued date of "just now", which is after
January 1 2016; hence, the cert is untrusted, even if the spyware put
its root in our root store.)

 * I'm not sure what action we should (or can) take about this, but for
now we should be on the lookout for this, and perhaps consider writing a
support article about it if we haven't already. (Not sure there's much
help we can offer, since removing spyware correctly/completely can be
tricky and varies on a case by case basis.)

(Context: I received a family-friend-Firefox-support phone call today,
who had this exact problem.  Every HTTPS site was broken for her in
Firefox, since January 1st.  IE worked as expected (that is, it happily
accepts the spyware's SHA1 certs, for now at least).  I wasn't able to
remotely figure out what the piece of spyware was or how to remove it --
but the rejected certs reported their issuer as being "Digital Marketing
Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
turn up anything useful, unfortunately; so I suspect this is "niche"
spyware, or perhaps the name is dynamically generated.)

Anyway -- I have a feeling this will be somewhat-widespread problem,
among users who have spyware (and perhaps crufty "secure browsing"
antivirus tools) installed.

~Daniel

[1]
https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 09:47 AM, Eric Rescorla wrote:
> I believe that Chrome will be making a similar change at a similar time
> 
> -Ekr

In contrast, it looks like IE & Edge will continue accepting SHA1 certs
on the web for another full year, until 2017. [1][2]

~Daniel

[1] https://blogs.windows.com/msedgedev/2015/11/04/sha-1-deprecation-update/
[2]
https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/973/102/important-change-announcement---deprecation-of-sha-1
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 10:21 AM, Adam Roach wrote:
> I propose that we minimally should collect telemetry around this
> condition. It should be pretty easy to detect: look for cases where we
> reject very young SHA-1 certs that chain back to a CA we don't ship.
> Once we know the scope of the problem, we can make informed decisions
> about how urgent our subsequent actions should be.

I had a similar thought, but I think it's too late for such telemetry to
be effective. The vast majority of users who are affected will have
already stopped using Firefox, or will immediately do so, as soon as
they discover that their webmail, bank, google, facebook, etc. don't work.

(We could have used this sort of telemetry before Jan 1 if we'd foreseen
this potential problem.  I don't blame us for not foreseeing this, though.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 10:18 AM, Eric Rescorla wrote:
> I believe you are confusing two different things.
> 
> 1. Whether the browser supports SHA-1 certificates at all.
> 2. Whether the browser supports SHA-1 certificates signed after Jan 1 2016
> (The CA/BF Baseline Requirements forbid this, so no publicly valid
> certificate
> should fall into this category).
> 
> It's not clear to me how IE/Edge are behaving with respect to #2.

Sorry, I wasn't clear.

What I was saying was
 * The definitive statements I've found from/about MS about SHA1 on the
web *only* mention your point #1 (and do not mention anything about #2).
 * My one data point, from this affected user, indicates that IE still
works just fine with freshly-minted SHA1 certs.

So, in the absence of statements about #2 (and in the presence of proof
otherwise), I see no reason to think Microsoft is taking action on that
point.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement & ship: support for a subset of -webkit prefixed CSS properties & features

2015-12-31 Thread Daniel Holbert
On 12/31/2015 11:37 AM, Martin Thomson wrote:
> If we intend to continue to support these,

(We do.)

> and particularly if we
> anticipate more prefixed rules in future

(Happily, I don't anticipate too many more of these -- at least, the
space of -webkit-prefixed features is bounded, because implementors &
standards bodies have realized that vendor prefixes are bad for
web-compat, so they aren't being used for new features. The Chrome/Blink
team, at least, have committed to implementing new features behind their
equivalent of about:config prefs instead of vendor prefixes:
https://www.chromium.org/blink#vendor-prefixes )

> I think that it would be
> worthwhile providing developers with a more visible notice regarding
> their status.  Something like the deprecation warning we use for DOM
> APIs [1] could be useful.  Otherwise, I worry that the warnings will
> go unnoticed.

I'm not sure I agree. We discussed this a bit during a web-compat
session in MozLando, and there are several reasons not to bother with a
warning in this case (and note that these reasons do not apply to the
deprecated DOM API scenario that you brought up):

 (1) Dubious effectiveness: The existing web content where these
warnings would be *most* warranted -- content with webkit-prefixed CSS &
no fallback -- is (by definition) *exactly* the web content whose
developers are not bothering to test Firefox. So, any warning that we
add would have little chance of reaching that intended audience of
developers; it'd just add background noise to our users' error consoles.

 (2) Dubious usefulness: Given that these prefixed features will now
Just Work in Firefox, and given that we're saying they're de-facto part
of the web & committing to supporting them (and so are all other modern
browsers), then there's no clear benefit/motivation for web developers
to remove these from their sites. So, for web developers that *do* see
these warnings, it's not clear why they should care & address them (and
take time away from fixing other things).

 (3) False positives: There are many "legitimate" ways that authors can
use prefixed properties, and if we added a warning, we'd probably need
to exclude those cases. Some examples of "legitimate" use cases, which
would require some careful extra instrumentation to detect:

CSS with standards-based fallback after it:
.box {
   display: -webkit-box;
   -webkit-box-orient: horizontal;
   /* lots more -webkit-box stuff */
   display: flex;
   flex-direction: row;
   /* lots more modern-flexbox stuff */
 }

CSS with standards-based fallback in a completely different CSS rule
(not sure how often this happens, but it's conceivable):
.legacyBox {
  display: -webkit-box;
}
.modernBox {
  display: flex;
}
...


"@supports"-guarded conditions (where the author is explicitly checking
for browser support before using the legacy feature):
@supports (display: -webkit-box) {
  .foo { display: -webkit-box }
}

JavaScript that sets the prefixed style and modern style in separate
statements (i.e. separate passes through the CSS parser, so we have no
way of knowing that a standards-based version is coming up):
   elem.style.display = "-webkit-box";
   elem.style.display = "flex"; // use modern flexbox, if supported

Each of these "legitimate" scenarios would require a different set of
heuristics to skip the warning (and I'm not sure we'd be able to skip
the warning at all, for the 2nd and 4th cases).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement & ship: support for a subset of -webkit prefixed CSS properties & features

2015-12-31 Thread Daniel Holbert
On 12/31/2015 01:15 PM, Daniel Holbert wrote:
>  (1) Dubious effectiveness:
[...]
>  (2) Dubious usefulness: Given that these prefixed features will now
> Just Work in Firefox, and given that we're saying they're de-facto part
> of the web & committing to supporting them (and so are all other modern
> browsers), then there's no clear benefit/motivation for web developers
> to remove these from their sites. So, for web developers that *do* see
> these warnings, it's not clear why they should care & address them (and
> take time away from fixing other things).

(In retrospect, I should've titled (2) "Dubious benefits", instead of
"usefulness", to more clearly differentiate it from (1). Let's pretend I
did.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement & ship: support for a subset of -webkit prefixed CSS properties & features

2015-12-30 Thread Daniel Holbert
Summary:
  A good chunk of the web today (and particularly the mobile web)
effectively relies on -webkit prefixed CSS properties & features. We
wish we lived in a world where web content always included
standards-based fallback (or at least multiple-vendor-prefixed
fallback), but alas, we do not live in that world.  To be successful at
rendering the web as it exists, we need to add support for a list of
frequently-used -webkit prefixed CSS properties & features.
  Every other major modern browser engine implements support for these
aliases -- Blink & WebKit obviously have them, & Edge includes them for
compatibility.  (I'm not sure about IE's support, but it's not a
particularly important data point, given that Microsoft is focused on
Edge going forward.)

Bug tracking implementation:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1170789

Bug to enable pref:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1143147
 (Will likely land in the next few days.)

Link to standard:
  Mike Taylor is working on a WHATWG spec describing the -webkit
prefixed features that we believe are needed for web compatibility.
That spec lives here: http://compat.spec.whatwg.org/
  There's also been some discussion on the CSSWG mailing list about
updating official CSS specs to mention legacy -webkit aliases (and
discourage authors from using them), as discussed in this thread:
https://lists.w3.org/Archives/Public/www-style/2015Dec/0132.html

Platform coverage:
  All platforms.

Estimated or target release:
  Firefox 46 (current Nightly), or 47 if we need to hold it back a
release to fix things.

Preference behind which this will be implemented:
  layout.css.prefixes.webkit

Side note on earlier work:
  Earlier this year, in bug 1107378, we shipped an experimental JS-based
version of this feature, which was only active for a whitelist of sites
(all of which strongly depend on webkit prefixes for usability). This
experiment proved successful at making the whitelisted sites usable in
Firefox.  The new implementation (behind "layout.css.prefixes.webkit")
will supersede the older experimental JS-based implementation and will
not be whitelisted.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: RC4 disabled by default in Firefox 44

2015-09-14 Thread Daniel Holbert
On 09/01/2015 09:56 AM, Richard Barnes wrote:
> # Compatibility impact
> 
> Disabling RC4 will mean that Firefox will no longer connect to servers that
> require RC4.  The data we have indicate that while there are still a small
> number of such servers, Firefox users encounter them at very low rates.

One affected family of sites are American Airlines & US Airways.  I
can't load any of the following sites, and as a result I was unable to
checkin for some recent flights in Firefox Nightly:

  https://aa.com/
  https://www.aa.com/
  https://aavacations.com/
  https://www.usairways.com/
  https://checkin.usairways.com/

It may be (as mentioned in the text quoted above) that these sites are
visited at a low rate, but when users do need to visit them (e.g. for
flight checkin), it's pretty high-priority that they work.

Do we have tech evang plans to tell these sites & any other prominent
affected sites about the issue & pressure them to fix it? If they're not
even aware of the problem, and we end up being the first mover here
(say, because other browsers decide to punt for a release or two), I'd
hate for us to get the reputation as the Browser Which You Can't Rely On
To Check In For Your Flights.

(We've had tech evang bugs filed on American & US Airways for 6 months:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1141604
  https://bugzilla.mozilla.org/show_bug.cgi?id=1142703
but I'm not aware of any actual outreach that's happened, aside from me
sending American a twitter-ping, which more than likely ended up in
their /dev/null.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The War on Warnings

2015-06-04 Thread Daniel Holbert
On 06/04/2015 01:18 PM, smaug wrote:
> More likely we need to change a small number of noisy NS_ENSURE_* macro
> users to use something else,
> and keep most of the NS_ENSURE_* usage as it is.

I agree -- I posted about switching to something opt-in, like MOZ_LOG,
for some of the spammier layout NS_WARNINGS, too:

https://groups.google.com/forum/?fromgroups#!topic/mozilla.dev.tech.layout/YXauN50HDhI

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Private members of ref counted classes and lambdas

2015-06-04 Thread Daniel Holbert
On 06/04/2015 07:29 AM, Andrew Osmond wrote:
> Suppose I have some ref counted class Foo with the private member mBar.
> Normally with a lambda expression, [...]
> obviously the Foo object could get released before the dispatch
> completes

You may be interested in this thread from a few months back:
Proposal to ban the usage of refcounted objects inside C++ lambdas in
Gecko
https://groups.google.com/d/msg/mozilla.dev.platform/Ec2y6BWKrbM/xpHLGwJ337wJ

(Not sure it arrived at a concrete conclusion, but you may run across
some pitfalls/suggestions at least. I haven't used lambdas in C++, so I
won't attempt to directly answer your question.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The War on Warnings

2015-06-04 Thread Daniel Holbert
On 06/04/2015 05:30 AM, Ehsan Akhgari wrote:
> There are very good reasons for warnings to not cause tests to fail.  We
> have a lot of tests that are testing failure conditions that are
> expected to warn, because they are failure conditions.

Also: in layout, there are various warnings related to coordinate
wraparound/overflow, where we're basically throwing up our hands and
warning that broken layout is likely to occur because the page is
millions of pixels tall.  These can be useful hints, when debugging page
brokenness.

We have a lot of tests that exercise how we behave around this
coordinate-overflow threshold [mostly that we don't crash], and it's
expected that such tests would trigger these warnings.

One thing that might help for noise from these: Jesse filed a bug[1] a
while back on adding a flag to detect huge sizes & suppress some
assertions when we do; we could conceivably use this flag to suppress
warnings [perhaps just displaying a single "detected huge sizes"
warning], as well.

~Daniel

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=765861
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-06-02 Thread Daniel Holbert
On 06/02/2015 12:58 PM, smaug wrote:
> So, I'd like to understand why people think 'auto' is a good thing to use.
> (bz mentioned it having some use inside bindings' codegenerator, and
> sure, I can see that being rather
> valid case.)

One common auto usage I've seen is for storing the result of a
static_cast.  In this scenario, it lets you avoid repeating yourself
and makes for more concise code.  I don't think there's much danger of
fragility in this scenario (unlike your refcounting example), nor is
there any need for a reviewer/code-skimmer to do research to find out
the type -- it's still right there in front of you. (it's just not
repeated twice)

For example:
  auto concretePtr = static_cast<ReallyLongTypeName*>(abstractPtr);

Nice & concise (particularly if the type name is namespaced or otherwise
really long).  Though it perhaps takes a little getting used to.

(I agree that mixing auto with smart pointers sounds like a recipe for
fragility & disaster.)

~Daniel


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used

2015-05-08 Thread Daniel Holbert
On 05/07/2015 02:53 PM, Karl Tomlinson wrote:
> The warning that you are proposing to fix here is
> -Woverloaded-virtual.  [EDIT: karl meant to say
> -Winconsistent-missing-override]
>
> At least once we can build with this warning enabled, I recommend
> making this warning fatal instead of covering over it by adding an
> override annotation that the author may have never intended.

Semi-tangent, to correct one premise here:

Good news -- we already *can & do* build with this warning
(-Winconsistent-missing-override) enabled, and it's fatal in
FAIL_ON_WARNINGS directories, which is most of the tree.  bug 1117034
tracks a lot of the fixes for that.

You just need clang 3.6 or newer to get this warning, and our official
TreeHerder clang builders have an older version, so they don't report
this warning.  As a result, we get a few new instances checked in per
week -- but I've been catching those locally (since they bust my build)
and I've been fixing them as they crop up.

(But anyway, as ehsan replied separately, his proposed coding style
change isn't about fixing instances of this build warning.)

/semi-tangent
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-04 Thread Daniel Holbert
On 05/04/2015 09:39 AM, Florian Bösch wrote:
> Here is what I wrote that client:
>
> [...] For security reasons browsers want to disable fullscreen if you
> are not serving the website over HTTPS.

Are you sure this is true?  Where has it been proposed to completely
disable fullscreen for non-HTTPS connections?

(I think there's a strong case for disabling *persistent* fullscreen
permission, for the reasons described in ekr's response to you here.  I
haven't seen any proposal for going beyond that, but I might've missed it.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-04 Thread Daniel Holbert
Great!

Without getting too deep into the exact details about animation /
notifications / permissions, it sounds like Florian's concern RE
"browsers want to disable fullscreen if you are not serving the website
over HTTPS" may be unfounded, then.

(Unless Florian or Martin have some extra information that we're missing.)

~Daniel


On 05/04/2015 02:55 PM, Jet Villegas wrote:
 We're adding UX to clearly indicate http:// or https:// in fullscreen
 while still meeting the user desire for secure one-click-to-fullscreen.
 The latest and greatest proposal posted here:
 
 https://bugzilla.mozilla.org/show_bug.cgi?id=1129061
 
 --Jet
 
 On Mon, May 4, 2015 at 2:04 PM, Eric Rescorla e...@rtfm.com
 mailto:e...@rtfm.com wrote:
 
 On Mon, May 4, 2015 at 1:57 PM, Xidorn Quan quanxunz...@gmail.com
 mailto:quanxunz...@gmail.com wrote:
 
  On Tue, May 5, 2015 at 6:04 AM, Martin Thomson m...@mozilla.com 
 mailto:m...@mozilla.com wrote:
 
   On Mon, May 4, 2015 at 11:00 AM, Daniel Holbert dholb...@mozilla.com 
 mailto:dholb...@mozilla.com
   wrote:
(I think there's a strong case for disabling *persistent* fullscreen
permission, for the reasons described in ekr's response to you 
 here.  I
haven't seen any proposal for going beyond that, but I might've 
 missed
   it.)
  
   A little birdy told me that that is planned.
  
 
  I'm currently working on fullscreen. I believe our current plan is 
 neither
  disabling fullscreen on HTTP, nor disabling persistent permission of 
 that.
 
  Instead, we're going to remove permission bit on fullscreen, which means
  website can always enter fullscreen as far as that is initiated from a 
 user
  input. We plan to use some transition animation to make entering 
 fullscreen
  obvious for users, so that they are free from the burden deciding 
 whether a
  website is trustworthy.
 
 
 This is not what I gathered from the notes Richard Barnes forwarded me.
 Rather, I had the impression that we were going to make the
 animation more
 aggressive *and* require a permissions prompt every time for HTTP.
 
 Richard?
 
 -Ekr
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org mailto:dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
 
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Flexbox bug with absolutely positioned elements?

2015-04-29 Thread Daniel Holbert
On 04/29/2015 03:06 AM, Amit Zur wrote:
> http://jsfiddle.net/2ccvwjmr/1/
> Seems like DOM order has influence on the absolutely positioned element. I
> don't think it's a desired behaviour.
> Did I do anything wrong? Can you verify if this is a bug?

It's correct according to an older version of the spec; it's incorrect
according to the current spec.  (The spec language on abspos children of
flex containers was one of the last sections to stabilize.)

So, yup, it's a bug, specifically this one:
  https://bugzilla.mozilla.org/show_bug.cgi?id=874718
and I'm hoping to fix it in Q2 or Q3 of this year.

For now, I'd suggest avoiding depending on any exact behavior for abspos
flex items, because each rendering engine does something a bit different
(based on implementing different spec snapshots). (I've been told
IE.next, Spartan, implements the currently specced behavior, and
Chrome will be updating to the current spec soonish; they currently
implement some intermediate spec-language on this.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Java Deployment Kit block

2015-04-28 Thread Daniel Holbert
On 04/28/2015 04:16 PM, Dhon Buenaventura wrote:
> Hi There,
>
> The block placed on the Java Deployment Kit seems to affect other plugins
> such as Flash. I was using Nightly 64-bit as my web browser and have
> observed that in the past few days, Adobe Flash seems to not work even
> though I have it set to always enabled.

I don't think this has anything to do with Java. You're likely hitting
this bug:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1158270
At least, I saw similar issues (flash being aggressively blocked) over
the past few days, and that ended up being the bug I was hitting.

That particular bug is fixed as of the Nightly that went out this
morning. So, with any luck, your issue should be fixed if you update to
the latest nightly.

If you're still having trouble, please file a bug here:
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Plug-ins

Thanks,
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used

2015-04-27 Thread Daniel Holbert
On 04/27/2015 12:48 PM, Ehsan Akhgari wrote:
> I think we should
> change it to require the usage of exactly one of these keywords per
> *overridden* function: virtual, override, and final.

Let me attempt to clarify why the "exactly one" requirement here is
important. (It's non-obvious.)

This verbose function-signature...
  virtual void foo() final override;
...can be expressed as the following, without removing any strictness:
  void foo() final;

BUT, if we add back the "virtual" keyword, we *lose some strictness*. In
other words, this is *not* equivalent to the versions above:
  virtual void foo() final;
Unlike the formulations above, this version isn't guaranteed to override.

The keywords "final override" can be viewed as redundant, but only if
the "virtual" keyword is not present. This is because "final" implies
"virtual", and virtualness-without-an-explicit-virtual-keyword implies
"override". So "final" + lack-of-virtual-keyword implies "override".

Hence the exactly one requirement that ehsan is proposing (which
Google has adopted).

> * It makes it easier to determine what kind of function you are looking at
> by just looking at its declaration.  |virtual void foo();| means a virtual
> function that is not overridden, |void foo() override;| means an overridden
> virtual function, and |void foo() final;| means an overridden virtual
> function that cannot be further overridden.

After discussing briefly with ehsan on IRC -- let me clarify the
language on this, since the word "overridden" is ambiguous here. (ehsan
was mostly using it to mean "is an override of something else")

I'll restate the three categories, with less ambiguous language:
- |virtual void foo();| means a virtual function that is not an override
of a superclass method.
- |void foo() override;| means an overriding virtual function.
- |void foo() final;| means an overriding virtual function that cannot
be further overridden.
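
To make those three categories concrete, here's a minimal sketch (the
class & method names are made up, obviously):

  class Base {
    virtual void Foo();  // new virtual; not an override of anything
  };
  class Derived : public Base {
    void Foo() override; // overriding virtual
  };
  class MoreDerived : public Derived {
    void Foo() final;    // overriding virtual; can't be overridden further
  };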

> * It will allow us to remove NS_IMETHODIMP, and use NS_IMETHOD instead.

(FWIW: This is because NS_IMETHODIMP expands to the exact same thing as
NS_IMETHOD, minus the 'virtual' keyword:
https://mxr.mozilla.org/mozilla-central/source/xpcom/base/nscore.h?rev=7e4e5e971d95&mark=96-97#96
So if we were to drop 'virtual' from NS_IMETHOD -- as we probably would
want to if we adopt ehsan's proposal -- then NS_IMETHODIMP would have no
reason to exist.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Misunderstood the Assigned at bugs! Sorry !!!

2015-04-07 Thread Daniel Holbert
On 04/07/2015 01:15 PM, Tobias B. Besemer wrote:
> OK, to reopen this discussion ...
>
> I suggested in Bug 1151371 to activate the status IN_PROGRESS in bmo and
> use this status for bugs that are in progress (patch in work) and that
> everybody use the status applied in future only as taken or as in the
> to-dos-list like the others do.
> My arguments for this and reasons can be found in the bug.

People already have inconsistent interpretations of what the bug
status field ASSIGNED vs NEW means (and inconsistent
levels-of-bothering-to-actually-tweak-the-flag). Adding an additional
IN_PROGRESS status will likely just make things more confusing --
particularly given that many people won't bother to set it, either
explicitly or accidentally.  (Why should they? It's extra process for
process's sake, and it'd arguably be a waste of their time.)

What is the problem you're trying to solve?  I think you're worried
about new contributors mistakenly thinking NEW means unassigned?  If
that's your concern, then (to the extent that's an actual problem) I
think it'd be better to focus on surfacing the assignee field, and
highlighting tools like Bugs Ahoy which these contributors should be
using in the first place.

I don't think adding an additional status (which will break existing
conventions  will be applied inconsistently) is going to help here.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: IndexedDB transactions are no longer durable by default, and other changes

2015-04-01 Thread Daniel Holbert
You've now sent 3 "please unsubscribe me" posts -- I don't think those
have any effect, aside from spamming everyone else on the list.

If you want to unsubscribe, you can do so via this link (which is
included at the bottom of every email you receive from this list):

  https://lists.mozilla.org/listinfo/dev-platform

Unsubscription instructions are available at the bottom of that page.

~Daniel


On 04/01/2015 06:39 PM, Alex Webster wrote:
 Please unsubscribe me
 On Wed, Apr 1, 2015 at 9:26 PM Jonas Sicking jo...@sicking.cc wrote:
 
 On Thu, Apr 2, 2015 at 3:00 AM, ben turner (bent)
 bent.mozi...@gmail.com wrote:
 On Wednesday, April 1, 2015 at 2:12:40 PM UTC-7, somb...@gmail.com
 wrote:
 - Crash-wise, are we talking about only the parent process crashing, or
 are we talking about the child process crashing too?

 I was talking just about the parent process. If the child process
 crashes then whether or not the transaction is durable depends on whether
 or not the parent received the commit message and processed it (otherwise
 we automatically abort any outstanding transactions). That behavior is
 unchanged.

 That doesn't sound entirely right.

 As soon as the child process received the last success event for a
 given request in a transaction, and we've sent the go ahead and
 commit message to the parent, then I would expect that it's ok for
 the child to crash at any point. This might match what you are saying.

 However the more important question that I believe Andrew is asking,
 is if we receive a commit event, what can then crash without there
 being a dataloss. My understanding is that at that point both the
 child and the parent can crash. As long as the kernel doesn't panic,
 or the battery runs out or is removed, the data will be written.

 I.e. once the commit event fires, the data has been transferred to
 the OS, and as long as the OS shuts down cleanly, or has time to
 flush, the data will be safe.

 1. Device shutdown via the UI

 This should be fine, we will flush to disk before actually powering down.

 2. User pulls the battery.
 3. Hardware-shutdown via long-hold of the power button.
 4. Device runs out of power.

 These three cases may fail to flush the data to disk, and if so when we
 restart the transaction will be rolled back.

 For 3: long-power-button-hold it seems like
 depending on how extensively things are wedged, we could notice at say 6
 seconds that we're headed for a shutdown and try to trigger an fsync and
 put a stop to new transactions.

 That sounds reasonable, assuming we get that notification at the gecko
 layer? I am not sure what kind of info we get when long-presses happen.

 I would think that when long-press happens, the CPU simply receives a
 reset signal. So it's equivalent to pulling the battery and putting it
 back.

 But maybe the OS is sent a signal a few seconds prior to that which
 would encourage it to flush. I'm not sure.

 For 4: low-power, ideally there's just
 a point that the device decides it's not safe to operate and shuts
 itself down, and that's slightly before it would result in IndexedDB
 badness.

 I don't know, but if it goes through normal shutdown then we will be
 fine.

 I'm pretty sure this results in the OS getting shut down cleanly, and
 so will flush any data it still holds. I.e. the data should be safe
 given my answer at the top.

 My main question is what is a realistic risk model for power loss/crash
 and are there mitigations we can put in place to reduce the risk to
 users?
 ...
 The issue there is how long the IndexedDB window of vulnerability is.

 Flushing happens automatically after the database is idle (i.e. no
 active transactions) for 2 seconds at present. If the database never goes
 idle for that long then we continue to journal until the WAL size exceeds
 20MB on desktop or 10MB on mobile.

 But prior to that flushing the data has been write()ten, right? So the
 OS will take care of flushing it eventually as long as it's shut down
 cleanly.

 / Jonas
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: nsFrameList now supports range-based for loop

2015-03-31 Thread Daniel Holbert
On 03/31/2015 02:59 PM, Xidorn Quan wrote:
> I've landed bug 1143513
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1143513), which enables using
> range-based for loop on nsFrameList. Now, when you want to iterate
> nsFrameList, you no longer need nsFrameList::Enumerator. Just write:
>
> for (nsIFrame* frame : mFrames) { }

This is awesome! Goodbye to extra e.get() boilerplate.
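
For anyone who hasn't seen the old pattern, the before/after looks
roughly like this (sketching the old enumerator usage from memory):

  // Before:
  for (nsFrameList::Enumerator e(mFrames); !e.AtEnd(); e.Next()) {
    nsIFrame* frame = e.get();
    // ... use frame ...
  }

  // After:
  for (nsIFrame* frame : mFrames) {
    // ... use frame ...
  }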

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Flexbox bug with word-wrap?

2015-02-08 Thread Daniel Holbert
Fixed fiddle:
 http://jsfiddle.net/eum5bxub/4/

Basically what happens here is:
 (1) The text gets wrapped in an anonymous block, which is the flex item.

 (2) We run the flex algorithm, with 150px to divvy up. We *try* to
shrink that flex item, but we can't because it has (default)
min-width:auto, which prevents it from shrinking below its min-content
width (the width of the longest word).

 (3) So, the flex item ends up at its min-content width, which is larger
than the container -- it overflows.

To work around this, you need to:
 - Create an explicit flex item (just an extra layer of div) which you
can directly style, instead of using an anonymous flex item for the text.
 - style that flex item with min-width:0 to allow it to shrink below
its min-content size.
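
Putting those two steps together, a minimal sketch of the fix (the class
names are hypothetical; they just mirror what the fixed fiddle does):

  .container {
    display: flex;
    width: 150px;
  }
  .item {
    min-width: 0; /* allow shrinking below the min-content (longest-word) width */
  }

...with the text moved inside <div class="item">, so that this div --
rather than an anonymous block -- is the flex item.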

This doesn't affect Chrome, Safari, & Opera because they haven't
implemented min-width:auto, which means they don't stop the flex item
from shrinking in step (2) above.  Once they do implement it (& catch up
with the spec), you'll see the same behavior from them too.

Blink bug for that:
 https://code.google.com/p/chromium/issues/detail?id=426898

(There's probably a webkit version of that bug too but I don't have it
handy right now.)

~Daniel


On 02/08/2015 02:08 AM, Amit Zur wrote:
> Hi,
>
> See this fiddle:
> http://jsfiddle.net/eum5bxub/3/
>
> In firefox (version 35) the first box doesn't have wrapped text, but it
> should.
> Chrome, Safari & Opera get it right.
> I'm on Mac OS X 10.10
>
>
> Did I do anything wrong? Can you verify if this is a bug?
>
> Thanks,
> Amit
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: CSS display:contents

2015-02-04 Thread Daniel Holbert
Correct.

Conceptually, the div exists in the style tree (so e.g. your
text-align:center style gets inherited to the h1, via the style
system, and any other inherited properties or "div > *" style rules
would affect the children as well).  But the div does not exist in the
box tree.  It's replaced there with its children (just the h1 in this
case).  So, the border doesn't show up, and the 'width' & 'margin' also
have no effect since they're styling a box that doesn't exist.
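
In other words, for a setup roughly like your pen (the selector & values
here are made up):

  div {
    display: contents;       /* the div generates no box of its own... */
    border: 5px solid green; /* ...so this border never renders */
    width: 50%;              /* ...and width/margin have no box to apply to */
    text-align: center;      /* ...but inherited properties still reach the h1 */
  }

...you should see the centered h1 text, and no green border.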

~Daniel


On 02/04/2015 12:20 PM, cmzieba...@gmail.com wrote:
> I am trying to set up a CodePen to test this feature:
>
> http://codepen.io/WebDevCA/pen/vEeMjx
>
> If I understand display:contents properly then if it is working the green
> border around the DIV can't be seen but the contents of the H1 inside that
> DIV can be seen?
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests

2014-12-17 Thread Daniel Holbert
On 12/17/2014 01:27 PM, Martijn wrote:
> What about where setTimeout is used as a fallback for when some event
> failed to fire and the mochitest is stalled and the setTimeout is then
> used to finish the mochitest on time and give some useful debug info?

This exact scenario was called out in the initial message on this
thread, as a legitimate reason to use setTimeout, and with instructions
on what to do. Quoting ehsan:

> On Fri, Dec 12, 2014 at 10:34 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
> wrote:
> * If you have a legitimate reason for using timeouts like this (such as if
> your test needs to wait a bit to make sure an event doesn't happen in the
> future, and there is no other way of checking that), you should call
> SimpleTest.requestFlakyTimeout("reason");.  The argument to the function is
> a string which is meant to document why you need to use these kinds of
> timeouts, and why that doesn't lead into intermittent test failures.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests

2014-12-17 Thread Daniel Holbert
Sorry -- after re-reading, I realized I was wrong here -- your example
scenario is actually different from the legitimate scenario I alluded to
in the first message of this thread.

The legitimate scenario from that first message was:
 - We're expecting that an event *will not* fire.
 - We wait a bit to see if it fires.
 - Fail if it fires before the timeout expires.

(Clearly, regardless of the timeout we choose  -- and unexpected delays
-- any failures here are real and indicate bugs.)

In contrast, your scenario is:
 - We're expecting that an event *will* fire.
 - We wait a bit to see if it fires.
 - Fail if the event *does not* fire before the timeout expires.

Here, failures are iffy - they may or may not be real depending on
delays and whether the timeout was long enough.  This is precisely the
sort of thing that results in random-oranges (when e.g. new
test-platforms are added that are way slower than the system that the
test was developed on).

So, the idea now is that there should be a high threshold for adding the
second sort of test (ideally, we should just be using an event-listener
where possible). If a setTimeout is really needed for some reason in
this kind of scenario, the justification (and rationale for why it won't
cause random-orange) will need to be explicitly documented in the test.
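
(For reference, a test in the first, legitimate category might look
roughly like this -- just a sketch; the event name & bookkeeping variable
are hypothetical:

  SimpleTest.waitForExplicitFinish();
  SimpleTest.requestFlakyTimeout("need to wait to verify the event never fires");
  var eventFired = false;
  someTarget.addEventListener("foo", function() { eventFired = true; });
  setTimeout(function() {
    ok(!eventFired, "'foo' event should not have fired");
    SimpleTest.finish();
  }, 2000);
)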

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to ship: object-fit object-position CSS properties

2014-11-14 Thread Daniel Holbert
As of sometime early next week (say, Nov 17th 2014), I intend to turn on
support for the object-fit & object-position CSS properties by default.

They have been developed behind the
"layout.css.object-fit-and-position.enabled" preference. (The layout
patches for these properties are actually just about to land; they're
well-tested enough [via included automated tests] that I'm confident
turning the pref on soon after that lands.)
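
(For anyone unfamiliar with these properties, basic usage looks roughly
like this -- a minimal sketch with a made-up selector:

  img.thumbnail {
    width: 200px;
    height: 200px;
    object-fit: cover;        /* scale to fill the box, cropping as needed */
    object-position: 50% 25%; /* focus on the upper-middle part of the image */
  }
)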

Other UAs who are already shipping this feature:
  * Blink (Chrome, Opera)
  * WebKit (Safari) -- supports object-fit, but not object-position,
according to http://caniuse.com/#search=object-fit


This feature was previously discussed in this intent to implement
thread:
https://groups.google.com/forum/#!searchin/mozilla.dev.platform/object-fit/mozilla.dev.platform/sZx-5uYx6h8/dxcjm4yvCCEJ

Bug to turn on by default:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1099450
Link to standard:
  http://dev.w3.org/csswg/css-images-3/#the-object-fit
  http://dev.w3.org/csswg/css-images-3/#the-object-position
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: object-fit object-position CSS properties

2014-11-14 Thread Daniel Holbert
Here's the intent to ship thread, for reference:
https://groups.google.com/forum/#!topic/mozilla.dev.platform/DK_AyuGfFhg
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: object-fit object-position CSS properties

2014-11-14 Thread Daniel Holbert
On 11/14/2014 05:30 PM, Kyle Huey wrote:
> Does it make sense to wait a release (meaning one week on trunk) here?
> Not judging this, just making sure you're aware of the dates.

Thanks -- yup, I'm aware that we're branching soon.

I don't think we'd gain much by holding off on this until the next
release cycle (giving it 6 weeks of preffed-on baking on Nightly,
instead of 1 week).  The spec is stable, the feature is pretty
straightforward & includes a large automated test-suite, so there's low
risk for regressions & spec-changes.  (And it'll get more/better testing
on Aurora/Beta than it'd get on Nightly, too. And it's easy enough to
turn off, in the unlikely event that we do find out it's not ready for
shipping to our release users.)

Moreover, other UAs are already shipping this, and authors want it, so
it's better to get this on track to ship sooner rather than later.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Building Firefox install

2014-11-10 Thread Daniel Holbert
On 11/10/2014 01:44 AM, Josip Maras wrote:
> How can I build a normal, standard Firefox installer for Windows,
> like the one distributed to standard Firefox users?

I don't know the answer to your specific question (I've never personally
had to build the installer), but just as a heads-up: you can't legally
*distribute* a modified Firefox build, using the official
branding/trademarks, unless you've gotten explicit permission.  See the
"Modifications" section here:
  https://www.mozilla.org/en-US/foundation/trademarks/policy/

Hopefully you already know this & your build is just for personal use &
not for distribution to others. :)

Anyway, good luck with your build issue.

Thanks,
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Flex issue in Firefox Beta

2014-11-05 Thread Daniel Holbert
This is known/expected fallout from a spec change. See
https://bugzilla.mozilla.org/show_bug.cgi?id=1043520 for other trouble
that it's caused. :-/

tl;dr: you need to set min-height:0 on the 'section' to get the
behavior you want. Here's the fixed version:

http://jsfiddle.net/vyhf2rzL/2/
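
In other words, the one-line fix in that fiddle boils down to something
like this (sketching from memory of the fiddle's markup):

  section {
    min-height: 0; /* let this flex item shrink below its content height,
                      so the scrollable child can actually scroll */
  }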

Basically: flex items are now, by default, given a minimum main-size
(height in this case) of their min-content size (their auto-height, in
this case).  In your case, that is the *full* height of the tall list.
The flex item won't shrink below this minimum size, unless you disable
it by explicitly setting min-height.

The spec tries to make this do the right thing with
overflow:auto/scroll/hidden content by *disabling* this min-size
behavior on flex items that have overflow set. But in your example,
the (tall) flex item does *not* have overflow set.  Its *child* has
overflow set.  So we still do the min-content min-sizing behavior (as
the spec requires).

I brought this scenario (with overflow being set on a child of a flex
item) up on the www-style list, since it's the most common pitfall we've
run into with this spec-change breaking content:
  http://lists.w3.org/Archives/Public/www-style/2014Jul/0589.html
...but unfortunately there's not a good solution for it; it seems that
authors just have to know to set min-height:0 (or min-width:0) in
this case.

~Daniel


On 11/05/2014 08:02 AM, sendwithch...@gmail.com wrote:
> Hi,
>
> The following jsFiddle shows inconsistency between FF versions 33 and 34.
> In version 34, the content is not scrollable. In 33 it is.
>
> http://jsfiddle.net/vyhf2rzL/1/
>
> Is this a bug in FF 34?
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-10-17 Thread Daniel Holbert
On 09/25/2014 10:29 AM, Ehsan Akhgari wrote:
>> I don't see this temporary difference as particularly problematic,
>> particularly given that pixelated is primarily an upscaling feature,
>> and given that we'll match them before too long.  But if others
>> disagree, I'm open to holding off on shipping image-rendering:
>> pixelated until bug 1072703 is fixed.
>
> I would really prefer if we ship something interoperable with Chrome, so
> unless bug 1072703 is a very large project, I don't think we should ship
> support for pixelated without it.

Just to follow up on this:

I'm leaning towards conservatism here & following ehsan's advice to
not ship pixelated until we've got downscaling-detection implemented,
for prettier downscaling with image-rendering: pixelated. (bug 1072703)

Unfortunately, it turned out that this is a less-trivial project than I
was hoping -- it requires changes to all of our drawing paths, in all of
our per-platform gfx/2d/DrawTarget{$WHATEVER} files.  It probably won't
be a ton of code, but it requires a good deal of testing/tweaking, on
every platform, to find all the paths that need adjusting, and to find
the right rects/transforms to inspect in each chunk of drawing code.
(And it likely requires some refactoring in these files, to share these
checks among the various drawing paths.)

So, I'm de-prioritizing work on pixelated for now, and I'm focusing on
finishing object-fit & object-position instead.  I hope to circle
back to finish off pixelated before too long, though.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: object-fit object-position CSS properties

2014-10-09 Thread Daniel Holbert
On 10/09/2014 02:39 AM, yio...@gmail.com wrote:
> Which version of the official opening?

It's looking like object-fit & object-position will be released in
Firefox 36, if that's what you're asking.

You can follow along on the bug page:
 https://bugzilla.mozilla.org/show_bug.cgi?id=624647

This is currently blocked on "image-rendering: pixelated" support, which
is required for good, interoperable regression-tests for this feature.
That'll be done soon; I'm working on that feature here:
 https://bugzilla.mozilla.org/show_bug.cgi?id=856337
 https://bugzilla.mozilla.org/show_bug.cgi?id=1072703

Thanks,
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-25 Thread Daniel Holbert
On 09/25/2014 09:16 AM, Ehsan Akhgari wrote:
> No, sorry for not being clear, I didn't mean pixel for pixel identical
> results.  My question was: are we going to have the same behavior for
> pixelated in the downscaling case, since now the spec allows two
> different behaviors for that case.

Gotcha.

Once the followup bug 1072703 is fixed, yes, we'll have effectively
the same downscaling behavior as Chrome, with
image-rendering:pixelated. (we'll both match our respective default
downscaling behaviors)

Before that (i.e. if we just take bug 856337), we won't -- we'd do
nearest-neighbor for downscaling, and they'll do their default thing.

I don't see this temporary difference as particularly problematic,
particularly given that pixelated is primarily an upscaling feature,
and given that we'll match them before too long.  But if others
disagree, I'm open to holding off on shipping image-rendering:
pixelated until bug 1072703 is fixed.

(I don't think that waiting on that bug is worthwhile... In the
meantime, authors who want a pixelated look (and want to support
Firefox) are going to have to use -moz-crisp-edges instead, which
means prefixed CSS will be propagating on the web, which is undesirable.)

(side note: I'm keen on supporting pixelated in the near term because
I need it in a pile of reftests that I'm writing for object-fit and
object-position, which will live in our directory of
reftests-that-get-upstreamed-to-the-w3c. As it stands right now, I'll
have to use -moz-crisp-edges in those reftests in order to get
reftestable image-upscaling behavior; but I'd rather use pixelated as
a single standardized supported-in-multiple-engines keyword, since these
tests are destined for an upstream non-moz testsuite.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-25 Thread Daniel Holbert
On 09/25/2014 08:24 AM, James Graham wrote:
 So, are we sure that this is what the spec *should* say? can we imagine
 a scenario in which authors either use hacks to specify different
 properties for different browsers

Bad news: we are already in that world. Right now, if authors want
pixelated upscaling instead of smeared upscaling, we make them do the
following horrible horrible thing:

 image-rendering: -moz-crisp-edges; /* Firefox */
 image-rendering: -webkit-optimize-contrast; /* Safari */
 image-rendering: pixelated; /* Chrome (soon) */
 -ms-interpolation-mode: nearest-neighbor; /* IE */

Yikes. Let's make that better.

 or sites look considerably worse in
 Gecko than some other browser, so we end up reverse engineering their
 behaviour, due to differences in the scaling behaviour?

We've had bugs filed like this in the past, and these bugs give us an
opportunity to think about whether we should switch to a different
scaling algorithm for that content/image/platform/etc.  If we can,
perhaps we should.  This is a good thing.

Note that if the spec *required* a specific algorithm for interop, then
we wouldn't be allowed to improve on a gross rendering in cases like
this (unless we want to violate the spec). I don't think we want to ask
that the spec tie our hands like that.

 If either of
 these seems likely then the spec likely shouldn't claim to be a hint
 but should actually specify what's needed for interop.

The CSS and HTML specs *already* don't require any particular algorithm
be used for image-scaling (AFAIK). And that's a good thing, because it
lets us make an educated decision given the capabilities of a device,
the type of image, etc., and make improvements when we realize that
we're making the wrong tradeoff.

I think the image-rendering spec strikes a reasonable balance between
allowing authors to specify specific intents, without tying browsers'
hands to lock them into a specific implementation.

(And in any case, this thread is about just adding a single value to our
already-existing image-rendering property.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-24 Thread Daniel Holbert
On 09/24/2014 07:38 AM, Ehsan Akhgari wrote:
 This makes the implementation considerably simpler, which is great.  It
 also means that pixelated will essentially just be a
 more-interoperable version of -moz-crisp-edges, for the time being.
 
 So, what are we planning to do with -moz-crisp-edges?

I'm not aware of any specific plans to change it at the moment.

 I think keeping it in its current form may be pointless (unless if we
 know this is something that the Web depends on?)

It's unclear to me whether the web depends on it.  As a proxy, we use it
in 16 places within /browser.

Moreover, note that this behavior is *only* available using per-browser
prefixed keywords; so I doubt authors have unprefixed fallback at this
point.  So, unprefixing or removing -moz-crisp-edges would likely
break content at this point. (I'll bet authors will start including
unprefixed pixelated soon, though, because that'll be the only way to
get this behavior in Chrome, once it ships there.)

 If I'm reading the
 spec correctly, we can actually unprefix it and make it equivalent to
 pixelated, but I'm not sure how valuable that is.

We probably do want to eventually unprefix -moz-crisp-edges (and as you
say, we *could* do so now), but if we're planning to tweak which
algorithm we use (unclear), it might be wise to wait until we've done
that before unprefixing.

 I think that this is
 allowed by the spec though, so perhaps it should be modified to say how
 crisp-edges must be different than pixelated.

They don't have to be different. As Tab said on the Blink
intent-to-implement thread:

  Having crisp-edges act like pixelated is an
  allowed implementation strategy.  It's also allowed,
  though, to be smarter when doing crisp-edges, and
  use an intelligent pixel-scaling algorithm, of
  which there are many.

  pixelated was added by request of multiple users, who sometimes
  literally want the big pixel look of plain nearest-neighbor
  interpolation.

https://groups.google.com/a/chromium.org/d/msg/blink-dev/Q8N6FoeoPXI/hoECmv_OUkYJ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-24 Thread Daniel Holbert
On 09/24/2014 09:32 AM, L. David Baron wrote:
 Or, alternatively, it seems like the use case here would be 
 addressed by doing what the spec said before.

Following up more on this: the CSSWG has now resolved to *allow* (but
not require) the formerly-required-by-spec prettier downscaling
behavior, per the first RESOLVED at the top of
 http://lists.w3.org/Archives/Public/www-style/2014Sep/0384.html

I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1072703 to cover
this.

Given that the main use-cases for image-rendering:pixelated are for
*upscaling*, I don't think the optional-better-downscaling-work should
block us from shipping a straightforward (& spec-compliant)
nearest-neighbor implementation for pixelated.

We can add better-downscaling logic separately, in the followup bug --
though it may even arrive in the same release where we ship pixelated
(or if not that, soon after), since as noted in my other response to
dbaron, it's not *too* much work.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-24 Thread Daniel Holbert
On 09/24/2014 06:26 PM, Ehsan Akhgari wrote:
 Hmm, doesn't that basically allow non-interoperable implementations? :(
  I think Jonas' idea on having separate properties for the upscale vs.
 downscale cases is much better.

I'm unconvinced about the usefulness of exposing that much control. This
is something that could be added to CSS Images level 4, though, if you
really thing it merits doing.  I suspect CSS Images level 3 is too late
in the game for that sort of change.

 So, what's Chrome's position with regards to this spec churn?

As of this morning, their intent to ship thread had a few posts
sounding like they'd take out the smart-downscaling eventually, to align
with the ED as-it-stood-this-morning.

But now that prettier downscaling is allowed, I assume they'll stick
with what they've already implemented (which includes the default
downscaling behavior).

 Is what
 they're going to ship in Chrome 38 going to be interoperable with our
 implementation?

It depends on what you mean by interoperable.  If you're asking if
they'll produce the exact same result, pixel-for-pixel, when downscaling
an image, then no.  But that's likely already the case, with the default
scaling behavior; I'd be surprised if we matched them 100% on
image-downscaling.

Also, stepping back a bit w.r.t. the interoperability of the
image-rendering property -- this property is explicitly a _hint_, to
express author intent. Its description in the spec starts like this:

 # The image-rendering property provides a hint
 # to the user-agent about what aspects of an
 # image are most important to preserve when the
 # image is scaled, to aid the user-agent in the
 # choice of an appropriate scaling algorithm.
http://dev.w3.org/csswg/css-images-3/#propdef-image-rendering

So, it's not required to behave exactly the same everywhere; it simply
codifies an author's intent.  (OK, I suppose it *is* required to behave
exactly the same everywhere in the case of pixelated & upscaling,
since that requires a particular algorithm, to achieve a particular
effect. But other than that, it's purely a hint.)

So, I don't think pixel-for-pixel-identical is a level of
interoperability that's required or expected for this property, in general.

Also, note that once followup-bug-1072703 is fixed, we'll be as
interoperable with Chrome as our default downscaling behavior is with
theirs. (which is probably pretty close, though I suspect not
pixel-for-pixel identical.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-24 Thread Daniel Holbert
On 09/24/2014 09:23 PM, Daniel Holbert wrote:
 So, it's not required to behave exactly the same everywhere; it simply
 codifies an author's intent.  (OK, I suppose it *is* required to behave
 exactly the same everywhere in the case of pixelated & upscaling,
 since that requires a particular algorithm, to achieve a particular
 effect. But other than that, it's purely a hint.)

(And actually, even in the case of pixelated & upscaling, the spec just
requires nearest-neighbor _or similar_.  So, [as it goes on to say],
the spec does not dictate any particular scaling algorithm to be used.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
Summary: The CSS declaration image-rendering: pixelated allows authors
to request that we scale up images by effectively making the pixels
larger (using a nearest-neighbor algorithm).  This is in contrast to
the default (non-pixelated) scaling behavior, which tends to blur the
edges between an image's pixels when upscaling.  The default behavior is
appropriate for use-cases like photos, but authors may prefer a
pixelated look e.g. when scaling up pixel-art or favicons.
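
For instance (selector name made up for illustration), an author could
display a 16x16 favicon at 8x its natural size with hard pixel edges,
rather than the default blurry upscaling:

  .favicon-preview {
    width: 128px;
    height: 128px;
    image-rendering: pixelated;
  }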

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=856337

Link to standard:
http://dev.w3.org/csswg/css-images/#valuedef-pixelated

Platform coverage: All

Estimated or target release: Firefox 35 or 36

Preference behind which this will be implemented:
None. This is a small, targeted feature; if we need to disable it for
some reason, we can easily do so with a small change to the CSS parser
(just removing the new keyword from the keyword-table for this property).

NOTES:
 - Blink has already implemented pixelated[1] and it'll be shipping[2]
in Chrome 38 [3].  So, if & when we ship it, there will be
interoperability between at least 2 engines here. (Other rendering
engines expose similar behavior, albeit under different non-standard
keyword-names.)
 - Gecko already has image-rendering: -moz-crisp-edges as a way to
request this behavior; the difference is that -moz-crisp-edges uses
the same algorithm (nearest-neighbor) regardless of whether it's
upscaling or downscaling, whereas pixelated is supposed to *only* use
that algorithm for upscaling, and use the default (auto) behavior when
downscaling.

~Daniel

[1] Blink intent-to-implement:
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/Q8N6FoeoPXI
[2] Blink intent-to-ship:
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/zSasd2LL8Mc
[3]
http://blog.chromium.org/2014/08/chrome-38-beta-new-primitives-for-next.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 02:16 PM, Ehsan Akhgari wrote:
 Why are upscaling and downscaling treated differently for pixelated?

I'm not entirely sure what the origin of that distinction is, but my
understanding (mostly from reading Tab's comments/responses on the Blink
intent-to-implement thread) is that Nearest-Neighbor Scaling really
doesn't look pixelated at all when scaling down, so authors asking for
a pixelated look really don't want nearest-neighbor downscaling.  The
default scaling algorithm will do a better job of downscaling your image
in a non-datalossy-way than nearest-neighbor would.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 02:39 PM, Daniel Holbert wrote:
 On 09/23/2014 02:16 PM, Ehsan Akhgari wrote:
 Why are upscaling and downscaling treated differently for pixelated?
 
 I'm not entirely sure what the origin of that distinction is, but my
 understanding (mostly from reading Tab's comments/responses on the Blink
 intent-to-implement thread) is that Nearest-Neighbor Scaling really
 doesn't look pixelated at all when scaling down

FWIW, I also emailed www-style to sanity-check my understanding & to see
if there are any other reasons for this behavior-difference:
 http://lists.w3.org/Archives/Public/www-style/2014Sep/0340.html

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 02:56 PM, Daniel Holbert wrote:
 FWIW, I also emailed www-style to sanity-check my understanding & to see
 if there are any other reasons for this behavior-difference:
  http://lists.w3.org/Archives/Public/www-style/2014Sep/0340.html

Turns out there wasn't a strong reason for the difference; Tab's now
updated the ED to remove the requirement that we match auto when
downscaling.

This makes the implementation considerably simpler, which is great.  It
also means that pixelated will essentially just be a
more-interoperable version of -moz-crisp-edges, for the time being.

(Down the line, we might want to change crisp-edges to use a different
scaling algorithm, and then they wouldn't be aliases anymore. The spec
allows us flexibility in choice of algorithm for crisp-edges [and
there are other edge-preserving algorithms like hqx that could be
better].  pixelated is required to stick with nearest-neighbor, though.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 04:24 PM, Jonas Sicking wrote:
 Would it make sense to have separate properties for scale up and
 scale down? With image-rendering being a shorthand for setting both?

Firstly: per my replies on the subthread started by ehsan, the
distinction in scale up vs. scale down behavior has (just now!) been
removed from the spec. It turns out that it was a somewhat arbitrary choice.

I suspect that removes the motivation for your question, but just in
case it doesn't: I don't see a strong use-case for an author to
request specific & different scale-up vs. scale-down behaviors --
particularly given that there are only a few image-rendering options
anyway, and only one of them (pixelated) is specced with
actually-reliable results. (The others are basically
whatever-the-rendering-engine-wants-to-do, under these rough
guidelines.)   If an author really wants different behavior for scaling
up vs. scaling down, they can already get their desired result via media
queries and/or JS.
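
(For instance, a purely hypothetical sketch: a 64px-wide sprite that only
gets drawn at 2x -- and is therefore only being upscaled -- in a
wide-viewport layout could do something like:

  .sprite { width: 64px; image-rendering: auto; }
  @media (min-width: 1000px) {
    .sprite { width: 128px; image-rendering: pixelated; }
  }

...though that's admittedly clunkier than dedicated per-direction
properties would be.)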

 Separately, isn't image-rendering a bit too generic of a name for
 setting scaling strategy?

Perhaps, but that ship has sailed :) This property was originally part
of SVG, and there, I think the scaling aspect was just implied. :)
(since it's already in the SVG name)

If it didn't already exist in SVG, and the CSSWG were creating this
property from scratch, they probably would have picked a different name
that included some mention of scaling; but here we are.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 04:38 PM, Daniel Holbert wrote:
 On 09/23/2014 04:24 PM, Jonas Sicking wrote:
 Would it make sense to have separate properties for scale up and
 scale down? With image-rendering being a shorthand for setting both?
 
 Firstly: per my replies on the subthread started by ehsan

(Oh, I guess you were *responding* to that subthread :) I didn't
initially see that, since I hadn't yet bumped this to my threaded
dev.platform folder. Anyway, I don't see a compelling motivation to go
for separate downscaling vs. upscaling prefs separately, at this point.
It's possible the property could be split in the future, though.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 10:08 PM, Martin Thomson wrote:
 On 2014-09-23, at 13:53, Daniel Holbert dholb...@mozilla.com wrote:
 
 Link to standard:
 http://dev.w3.org/csswg/css-images/#valuedef-pixelated
 
 Reading the spec it doesn’t say anything about what to do when the image is 
 scaled up on one axis and down on the other.  It’s probably not a 
 particularly valid use case, but I’d expect there to be at least something on 
 the subject.  The image sizing examples in the same document actually 
 demonstrate this exact case, odd as it might seem.
 

Yup, good question.

So, three late-breaking updates from ~today that address this:

 1) Tab moved this property-value to the CSS Images Level 3 spec in the
last day or so, so I believe the canonical definition is now here:
 http://dev.w3.org/csswg/css-images-3/#valdef-image-rendering-pixelated

(This is a *slightly* different URL than the one I gave in my initial
intent to implement post -- note the -3 in css-images-3. Sorry for
the confusion.)

 2) I posted to www-style asking basically your exact question earlier
today (following up on a thread from Simon Sapin), and Tab accepted
Simon's proposal to relax the language & make it only care if at least
one axis is being upscaled, and otherwise do the auto behavior.

 3) Later on today, in response to ehsan's question on this thread, I
asked if there was any strong reason to have the upscaling/downscaling
behavior-difference in the first place; there was not, so the
distinction was removed altogether.

So the current spec text is simply:
  # pixelated
  #The image must be scaled with the
  # nearest neighbor or similar algorithm,
  # to preserve a pixelated look as the image
  # changes in size.
from here:
http://dev.w3.org/csswg/css-images-3/#valdef-image-rendering-pixelated

Thanks,
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 01:53 PM, Daniel Holbert wrote:
 Link to standard:
 http://dev.w3.org/csswg/css-images/#valuedef-pixelated

As noted elsethread (in my response to Martin), it looks like the
canonical definition of this property-value is actually in a different
ED -- the level 3 ED. (whereas the link above is currently the level
4 ED).

The corrected link is:
http://dev.w3.org/csswg/css-images-3/#valdef-image-rendering-pixelated

(Both versions have spec-text on this property-value, and until this
morning, I believe the spec-text was the same; but there have been a few
small changes in response to my questions on www-style today, and those
changes have happened in the corrected URL for the level-3 spec -- not
the URL I'd originally posted for the level-4 spec.)

Thanks, & sorry for any confusion caused by this.
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: image-rendering: pixelated CSS property-value

2014-09-23 Thread Daniel Holbert
On 09/23/2014 10:30 PM, Daniel Holbert wrote:
 As noted elsethread (in my response to Martin), it looks like the
 canonical definition of this property-value is actually in a different
 ED -- the level 3 ED. (whereas the link above is currently the level
 4 ED).

(This change -- moving the image-rendering property between
spec-levels -- only happened yesterday[1] which is why I hadn't noticed
it until after sending my intent-to-implement post.)

[1] http://lists.w3.org/Archives/Public/www-style/2014Sep/0301.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement: object-fit & object-position CSS properties

2014-09-10 Thread Daniel Holbert
Summary: The 'object-fit' and 'object-position' properties allow web
developers to customize how a replaced element's content gets scaled and
positioned to fit the element's content-box. (i.e. how an image or a
video gets scaled/positioned inside of an <img>/<video> tag) The
'object-fit' property lets authors request e.g. 'contain' or 'cover'
behavior (or several other behaviors), and 'object-position' lets
them specify how the content should be aligned when there's extra space
available.  Together, these properties provide similar functionality to
the preserveAspectRatio attribute in SVG.
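
For example (the selector is just for illustration), to letterbox a
video's frames within the element's content-box and pin them to the
top-left corner instead of centering them:

  video.letterboxed {
    object-fit: contain;
    object-position: left top;
  }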

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=624647

Link to standard:
  http://dev.w3.org/csswg/css-images-3/#the-object-fit
  http://dev.w3.org/csswg/css-images-3/#the-object-position

Platform coverage: All

Estimated or target release: Firefox 35

Preference behind which this will be implemented:
 layout.css.object-fit-and-position.enabled


NOTE: Last night, I landed preffed-off support for these properties,
*just in CSS* (from bug 1055285) -- i.e. layout doesn't make use of them
yet. (So, in tomorrow's nightly with the pref toggled, the properties
can be parsed and can be inspected via getComputedStyle(), but they have
no effect on rendering for now.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: object-fit & object-position CSS properties

2014-09-10 Thread Daniel Holbert
On 09/10/2014 05:26 PM, Jonas Sicking wrote:
 Yes!
 
 Do we have a sense for how supportive other browser vendors are of
 these properties?

Supportive! I haven't tested other browsers' implementations yet, but I
do know that it's been implemented in Blink, and it was apparently
undergoing code-review in WebKit a year ago, so it's probably fixed
there too, by now.  More details here:
http://lists.w3.org/Archives/Public/www-style/2013May/0536.html

Note the "No info on Microsoft" there -- several months later in that
thread, a Microsoft representative said they haven't committed to its
implementation[1], but in a later message, they said "We don't have a
problem with the properties.  From our early evaluation, the approach
makes sense.  If the other implementers don't have objections to
un-prefixing these properties, neither do we."[2]

[1] http://lists.w3.org/Archives/Public/www-style/2013Oct/0005.html
[2] http://lists.w3.org/Archives/Public/www-style/2013Oct/0142.html

 And how stable we can expect these properties to be
 in the spec?

Pretty stable, I think. There's been no mention of the properties in
subject-lines on www-style since that ^ linked thread finished up in
October of last year (aside from one suggestion of an additional
object-fit value, which didn't get a response).  I take that as an
indication that things aren't really changing.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-07-10 Thread Daniel Holbert
On 07/10/2014 02:48 AM, Neil wrote:
 Except for refcounted base classes (which as you note need to have a
 protected virtual destructor), is there a correct style as to whether
 the destructor should be private or protected and virtual or nonvirtual?

IMO, the destructor should be nonvirtual, since we're only expecting to
be destroyed via the concrete class's ::Release() method.  So we
shouldn't need a virtual destructor.  (I'm not sure how best to *ensure*
that we're never destroyed via delete pointerOfSuperclassType
though... We could give all of the superclasses protected destructors,
which would help, but not be 100% foolproof.)

As for protected vs. private --  in the concrete class, I don't think it
particularly matters, since the keywords have the same effect, as long
as the class is MOZ_FINAL.

IIRC, when I was actively working on this cleanup, if we already had an
already-existing protected section like below...

class Foo MOZ_FINAL : public Bar {
 public:
   [...]
 protected:
   // overrides of protected methods from superclass
   // (in section labeled as protected for consistency w/ superclass)
   [...]
};

...then I just used the existing protected area, for conciseness,
rather than adding a new private area.  But if there wasn't any
consistency-with-my-superclass argument to be made for using
protected, then I'd just use private, to avoid the misleading
implication that we have subclasses which can see *our* protected stuff.

 First, if your class is abstract, then it shouldn't have
 AddRef/Release implementations to begin with.  Those belong on the
 concrete subclasses -- not on your abstract base class.

 What's correct code for abstract class Foo (implementing interfaces
 IFoo1 and IFoo2) with derived classes Bar and Baz (implementing
 interfaces IBar1 and IBar2 or IBaz1 and IBaz2 respectively)?

Shouldn't the refcounting still be on the concrete classes? Or does that
not work for some reason?  Based on your question, I'm guessing it
doesn't. Anyway, I'm not sure what's supposed to happen there. :)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-07-10 Thread Daniel Holbert
On 07/10/2014 08:03 AM, Benjamin Smedberg wrote:
 On 7/10/2014 10:46 AM, Daniel Holbert wrote:
 Shouldn't the refcounting still be on the concrete classes?
 Why?
 
 This happens for example with nsRunnable: nsRunnable does the
 refcounting, and all the derivations of nsRunnable only have to worry
 about overriding Run because there's a virtual destructor.

Oh, good point -- the refcounting can go on an abstract base class, if
that base class has a virtual destructor.

So, I retract my "if your class is abstract, then it shouldn't have
AddRef/Release implementations to begin with" statement from the initial
email on this thread.  (I think the rest of my "BUT WHAT IF I NEED
SUBCLASSES" section from that message still applies, but just with the
abstract/concrete distinction removed.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing early any changes on the try server and the exact build envs

2014-06-04 Thread Daniel Holbert
On 06/03/2014 07:32 AM, Gabor Krizsanits wrote:
 Currently m-c does not build with gcc 4.6 on ubuntu because of something
 similar. After
 updating to 4.8 I got some warnings in webrtc code, so I had to turn off
 warning-as-errors.

FWIW, you can work around that with ac_add_options --disable-webrtc.

I've filed bug 1020643 and bug 1020661 on addressing the build issues
you mentioned, though.

(I've used GCC 4.8 and warnings-as-errors for a long time, and I now
build with GCC 4.9, and it's mostly painless. :) I do have a bunch of
--disable-XYZ in my .mozconfig, including disable-webrtc, which is why
I hadn't noticed the issue you mentioned here until today.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing early any changes on the try server and the exact build envs

2014-06-04 Thread Daniel Holbert
On 06/03/2014 04:25 PM, Mike Hommey wrote:
 As for warning-as-errors, it's not meant to be used for local builds,
 because different compilers don't come with the same set of warnings.

I think that might be putting it a bit too strongly.  Warnings-as-errors
absolutely *is* meant to be used with local builds (so that you'll be
able to catch most issues locally instead of as build failures on Try or
Inbound).

You're very much right about this caveat, though:
 when compiling with a compiler that doesn't match what's used on
 automation, yes, you risk hitting new warnings, and failing to build as
 a consequence.

(This is why it's encouraged, but not the default behavior.)

This risk is fairly low if you're using a commonly-used (albeit
not-on-TBPL) compiler, because that makes it more likely that other devs
will have caught & fixed any potential issues before you hit them.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Daniel Holbert
Hi dev-platform,

PSA: if you are adding a concrete class with AddRef/Release
implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
of the following best-practices:

 (a) Your class should have an explicitly-declared non-public
destructor. (should be 'private' or 'protected')

 (b) Your class should be labeled as MOZ_FINAL (or, see below).


WHY THIS IS A GOOD IDEA
===
We'd like to ensure that refcounted objects are *only* deleted via their
::Release() methods.  Otherwise, we're potentially susceptible to
double-free bugs.

We can go a long way towards enforcing this rule at compile-time by
giving these classes non-public destructors.  This prevents a whole
category of double-free bugs.

In particular: if your class has a public destructor (the default), then
it's easy for you or someone else to accidentally declare an instance on
the stack or as a member-variable in another class, like so:
MyClass foo;
This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
(say, if some function that we pass 'foo' or '&foo' into declares a
nsRefPtr to it for some reason), then we'll get a double-free. The
object will be freed when the nsRefPtr goes out of scope, and then again
when the MyClass instance goes out of scope. But if we give MyClass a
non-public destructor, then it'll make it a compile error (in most code)
to declare a MyClass instance on the stack or as a member-variable.  So
we'd catch this bug immediately, at compile-time.

So, that explains why a non-public destructor is a good idea. But why
MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
route to trigger the same sort of bug -- someone can come along and add
a subclass, perhaps not realizing that they're subclassing a refcounted
class, and the subclass will (by default) have a public destructor,
which means then that anyone can declare
  MySubclass foo;
and run into the exact same problem with the subclass.  A MOZ_FINAL
annotation will prevent that by keeping people from naively adding
subclasses.

BUT WHAT IF I NEED SUBCLASSES
=
First, if your class is abstract, then it shouldn't have AddRef/Release
implementations to begin with.  Those belong on the concrete subclasses
-- not on your abstract base class.

But if your class is concrete and refcounted and needs to have
subclasses, then:
 - Your base class *and each of its subclasses* should have virtual,
protected destructors, to prevent the MySubclass foo; problem
mentioned above.
 - Your subclasses themselves should also probably be declared as
MOZ_FINAL, to keep someone from naively adding another subclass
without recognizing the above.
 - Your subclasses should definitely *not* declare their own
AddRef/Release methods. (They should share the base class's methods &
refcount.)

For more information, see
https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
this sort of thing in a bunch of existing classes.  I definitely didn't
catch everything there, so please feel encouraged to continue this work
in other bugs. (And if you catch any cases that look like potential
double-frees, mark them as security-sensitive.)

Thanks!
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Daniel Holbert
Nice!  So it sounds like we could use std::is_destructible to harden our
refcounting-impl macros (like NS_INLINE_DECL_REFCOUNTING), by having
those macros include a static_assert which enforces that they're only
invoked by classes with private/protected destructors.

(I'll bet that this static_assert wouldn't transfer to subclasses - e.g.
if class Foo has NS_INLINE_DECL_REFCOUNTING and Foo has a subclass Bar,
I'll bet the static_assert would only check Foo, and not Bar.
Fortunately, it's rare to have concrete refcounted classes with
subclasses, so this shouldn't be a huge source of trouble.)

For now, our code isn't clean enough for this sort of static_assert to
be doable. :-/ And we have at least one instance of a refcounted class
that's semi-intentionally (albeit carefully) declared on the stack:
gfxContext, per https://bugzilla.mozilla.org/show_bug.cgi?id=742100

Still, the static_assert could be a good way of finding (in a local
build) all the existing refcounted classes that want a non-public
destructor, I suppose.

~Daniel

On 05/28/2014 01:51 PM, Benoit Jacob wrote:
 Awesome work!
 
 By the way, I just figured a way that you could static_assert so that at
 least on supporting C++11 compilers, we would automatically catch this.
 
 The basic C++11 tool here is std::is_destructible from <type_traits>,
 but it has a problem: it only returns false if the destructor is
 deleted, it doesn't return false if the destructor is private. However,
 the example below shows how we can still achieve what we want by
 wrapping the class that we are interested in as a member of a helper
 templated struct:
 
 
 
 #include <type_traits>
 #include <iostream>
 
 class ClassWithDeletedDtor {
   ~ClassWithDeletedDtor() = delete;
 };
 
 class ClassWithPrivateDtor {
   ~ClassWithPrivateDtor() {}
 };
 
 class ClassWithPublicDtor {
 public:
   ~ClassWithPublicDtor() {}
 };
 
 template <typename T>
 class IsDestructorPrivateOrDeletedHelper {
   T x;
 };
 
 template <typename T>
 struct IsDestructorPrivateOrDeleted
 {
   static const bool value =
     !std::is_destructible<IsDestructorPrivateOrDeletedHelper<T>>::value;
 };
 
 int main() {
 #define PRINT(x) std::cerr << #x << " = " << (x) << std::endl;
 
   PRINT(std::is_destructible<ClassWithDeletedDtor>::value);
   PRINT(std::is_destructible<ClassWithPrivateDtor>::value);
   PRINT(std::is_destructible<ClassWithPublicDtor>::value);
 
   std::cerr << std::endl;
 
   PRINT(IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value);
   PRINT(IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value);
   PRINT(IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value);
 }
 
 
 Output:
 
 
 std::is_destructible<ClassWithDeletedDtor>::value = 0
 std::is_destructible<ClassWithPrivateDtor>::value = 0
 std::is_destructible<ClassWithPublicDtor>::value = 1
 
 IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value = 1
 IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value = 1
 IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value = 0
 
 
 If you also want to require classes to be final, C++11 type_traits
 also has std::is_final for that.
 
 Cheers,
 Benoit
 
 
 2014-05-28 16:24 GMT-04:00 Daniel Holbert dholb...@mozilla.com:
 
 Hi dev-platform,
 
 PSA: if you are adding a concrete class with AddRef/Release
 implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
 of the following best-practices:
 
  (a) Your class should have an explicitly-declared non-public
 destructor. (should be 'private' or 'protected')
 
  (b) Your class should be labeled as MOZ_FINAL (or, see below).
 
 
 WHY THIS IS A GOOD IDEA
 ===
 We'd like to ensure that refcounted objects are *only* deleted via their
 ::Release() methods.  Otherwise, we're potentially susceptible to
 double-free bugs.
 
 We can go a long way towards enforcing this rule at compile-time by
 giving these classes non-public destructors.  This prevents a whole
 category of double-free bugs.
 
 In particular: if your class has a public destructor (the default), then
 it's easy for you or someone else to accidentally declare an instance on
 the stack or as a member-variable in another class, like so:
 MyClass foo;
 This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
 (say, if some function that we pass 'foo' or '&foo' into declares a
 nsRefPtr to it for some reason), then we'll get a double-free. The
 object will be freed when the nsRefPtr goes out of scope, and then again
 when the MyClass instance goes out of scope. But if we give MyClass a
 non-public destructor, then it'll make it a compile error (in most code)
 to declare a MyClass instance on the stack or as a member-variable.  So
 we'd catch this bug immediately, at compile-time.
 
 So, that explains why a non-public destructor is a good idea. But why
 MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
 route

Re: Decision reached on: Consensus sought - when to reset try repository?

2014-04-30 Thread Daniel Holbert
On 03/07/2014 02:41 PM, Hal Wine wrote:
 On 2014-02-28 17:24 , Hal Wine wrote:
 tl;dr: what is the balance point between pushes to try taking too long
 and losing repository history of recent try pushes?
 Based on the responses to this specific question, we'll go back to
 waiting for developers to notify IT when there is enough performance
 impact to warrant a reset of the try repository

As documented on
 https://bugzilla.mozilla.org/show_bug.cgi?id=994028
we've now had multiple instances in the past few weeks where Try has
been horked (refusing all pushes) for hours at a time, with no clear
reason why.

I'm not sure if this is caused by Try having too many heads & needing a
reset, but it seems like it could be. (It also could be *indirectly*
caused by the too-many-heads issue, too; e.g. perhaps someone
interrupted a push because it was taking too long (due to too many
heads), and their client inadvertently left something on the server
locked, which then locks everyone else out for hours.)

Whatever the cause, it's feeling more and more like periodic, automatic
Try resets would be helpful to keep things running smoothly.

Would it be possible to set up a system along the lines of dbaron's
suggestion earlier in this thread? (Frequent resets, with a post-reset
step to pull in the most recent ~2 weeks' worth of heads from the old
repo, so that people's try pushes don't mysteriously disappear if they
happen to push right before a reset.)

Thanks,
~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enable -Wswitch-enum? [was Re: MOZ_ASSUME_UNREACHABLE is being misused]

2014-04-01 Thread Daniel Holbert
On 04/01/2014 08:56 AM, Zack Weinberg wrote:
 The downside of turning this on would be that any switch
 statements that *deliberately* include only a subset of the
 enumerators, plus a default case, would now have to be expanded to
 cover all the enumerators.  I would try it and report on how
 unpleasant that turns out to be, but my development box has taken
 to corrupting its filesystem if I merely do a 'hg update', so I'm
 kind of stuck, there. Anyone interested in trying it?

As a first approximation, it seems like we'd have to do this for
most switch statements that include a 'default:' case. (I'm betting
that most of those switch statements have omitted *some* enum values,
since until now, there's been no reason to include 'default' unless
you're trying to be concise and merge some values together.)

grep finds 3934 default statements in .cpp files, 469 in .h files:
 $ grep -r default: | grep .cpp: | wc -l
 3934
 $ grep -r default: | grep .h: | wc -l
 469

So, we have on the order of ~4400 switch statements that would
potentially need expanding to avoid tripping this warning.

Having said that, if we *really* wanted to add this warning, I suppose
we could add it directory-by-directory (or only ever add it in certain
directories, as is the case with -Wshadow in layout/style).

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


  1   2   >