Re: Removal of NPAPI plugin support in Firefox 85

2020-11-13 Thread Eric Rescorla
This is amazing news.

-Ekr


On Fri, Nov 13, 2020 at 10:10 AM Jim Mathies  wrote:

> Hey all,
>
> I am happy to announce that NPAPI plugin support will end in Firefox 85.
> At the start of the 85 cycle the Plugins Team plans to land changes that
> disable NPAPI plugin loading and display. We'll also schedule PI testing in
> both Nightly and Beta on these changes.
>
> The initial landing is designed to accomplish the following:
>
> 1) Remove any evidence of NPAPI plugin support from the Firefox UX.
> 2) Support good content handling of missing Flash content.
> 3) Remove or disable tests that no longer work due to plugins failing to
> load.
> 4) Cleanup any critical areas of the codebase tied to NPAPI plugin support.
>
> Please note full removal of plugin related code will not take place in
> this landing. We wanted to keep things as simple as possible such that if
> for some unforeseen reason we needed to back these changes out we could.
> This is not anticipated but it's better to be safe than sorry.
>
> Plugin support touched on numerous areas of the codebase. The Plugins Team
> (David Parks) has been heavily invested in getting this first set of
> patches right, but it's possible we may have missed something critical. We
> would like to ask everyone who is aware of plugin code in their respective
> modules to take a look at the patches posted in bug 1675349. Please file
> bugs blocking bug 1675349 if you are aware of additional changes that might
> be needed in Fx85.
>
> Once we get past the Fx85 Nightly testing round and Fx85 merges to Beta,
> removal of plugin code from various areas of the codebase will be greenlit.
> David has put effort into the removal of the old codebase and will post
> additional patches for landing in 86 in the coming weeks.
>
> If you would like to help out, feel free to file bugs blocking meta bug
> 1677160 ('plugin-cleanup') related to future plugin code cleanup work. For
> feature related work dependent on plugin deprecation, please mark those
> bugs as dependent on meta bug 1455897 ('remove-plugin-support').
>
> Internal questions or comments should be directed to the Mozilla Slack
> #flash-kill channel. For public discussion you can join the #plugins
> chat.mozilla.org channel.
>
> Regards,
> Jim


Re: A new testing policy for code landing in mozilla-central

2020-09-16 Thread Eric Rescorla
On Tue, Sep 15, 2020 at 9:28 AM Andrew McCreight 
wrote:

> On Tue, Sep 15, 2020 at 8:03 AM Brian Grinstead 
> wrote:
>
> > (This is a crosspost from firefox-dev)
> >
> > Hi all,
> >
> > We’re rolling out a change to the review process to put more focus on
> > automated testing. This will affect you if you review code that lands in
> > mozilla-central.
> >
> > ## TLDR
> >
> > Reviewers will now need to add a testing Project Tag in Phabricator when
> > Accepting a revision. This can be done with the “Add Action” → “Change
> > Project Tags” UI in Phabricator. There's a screenshot of this UI at
> > https://groups.google.com/g/firefox-dev/c/IlHNKOKknwU/m/zl6cOlT2AgAJ.
> >
>
> I of course think having more tests is a good thing, but I don't like how
> this specific process places all of the burden of understanding and
> documenting some taxonomy of exceptions on the reviewer. Reviewing is
> already a fairly thankless and time consuming task. It seems like the way
> this is set up is due to the specifics of how our review process is
> implemented, so maybe there's no way around it.
>

I certainly have some sympathy for this, but I'm not sure that it needs to
be addressed via tools. What I would generally expect in cases like this is
that the reviewer says "why isn't there a test for X" and the author says
"for reason Y" and either the reviewer does or does not accept that. That's
certainly been my experience on both sides of this interaction in previous
instances where there were testing policies but not this machinery.

> Also, contrary to what ahal said, I don't know that tests being an official
> requirement relieves the burden of guilt of asking for tests, as everybody
> already knows that tests are good and that you should always write tests.
>

I also don't know how this will impact people's internal states; I see
this as having two major benefits:

1. It tells people what we expect of them
2. It gives us the ability to measure what's actually happening and adjust
accordingly.

To expand on (2) a bit, if we look back and find that there was a lot of
code landed without tests but where exceptions weren't filed, then we know
we need to work on one set of things. On the other hand, if we see that
there are a lot of exceptions being filed in cases we don't think there
should be (for whatever reason) then we need to work on a different set of
things (e.g., improving test frameworks in that area).



> My perspective might be a little distorted as I think a lot of the patches
> I write would fall under the exceptions, either because they are
> refactoring, or I am fixing a crash or security issue based on just a stack
> trace.
>
> Separately, one category of fixes I often deal with is fixing leaks or
> crashes in areas of the code I am unfamiliar with. I can often figure out
> the localized condition that causes the problem and correct that, but I
> don't really know anything about, say, service workers or networking to
> write a test. Do I need to hold off on landing until I can find somebody
> who is willing and able to write a test for me, or until I'm willing to
> invest the effort to become knowledgeable enough in an area of code I'm
> unlikely to ever look at again? Neither of those feel great to me.
>

Agreed that it's not an ideal situation. I think it's important to step
back and ask how we got into that situation, though. I agree that it's not
that productive for you to write the test, but hopefully *someone* at
Mozilla understands the code and is able to write a test, and if not we
have some other problems, right? So what I would hope here is that you were
able to identify the problem, maybe write a fix and then hand it off to
someone who could carry it over the line.



> Another testing difficulty I've hit a few times this year (bug 1659013 and
> bug 1607703) are crashes that happen when starting Firefox with unusual
> command line or environment options. I think we only have one testing
> framework that can test those kinds of conditions, and it seemed like it
> was in the process of being deprecated. I had trouble finding somebody who
> understood it, and I didn't want to spend more than a few hours trying to
> figure it out for myself, so I landed it without testing. To be clear, I
> could reproduce at least one of those crashes locally, but I wasn't sure
> how to write a test to create those conditions. Under this new policy, how
> do we handle fixing issues where there is not always a good testing
> framework already? Writing a test is hopefully not too much of a burden,
> but obviously having to develop and maintain a new testing framework could
> be a good deal of work.
>

As a general matter, I think we need to be fairly cautious about landing
code where we aren't able to test. In some cases that means taking a step
back and doing a lot of work on the testing framework before you can write
some comparatively trivial code, which obviously is annoying at the time
but also is an

Re: Intent to ship: accept spaces and tabs in unquoted values (of e.g. "filename") used in Content-Disposition parameterized header pairs to align with other browsers

2020-08-19 Thread Eric Rescorla
Ugh. This does seem like the right thing to do in a bad situation. Thanks
and thanks to Anne for working to get the spec updated.

-Ekr

On Wed, Aug 19, 2020 at 10:10 AM Gijs Kruitbosch 
wrote:

> It's been pointed out to me that I neglected to merge the "intent to
> prototype" requirements into my email. So:
>
> Platform coverage: everywhere.
> Preference: no pref.
> DevTools bug: covered by existing network tooling (it already shows the
> full header).
> Other browsers: as noted, they already do this
> web-platform-tests: TBC. We have internal tests which were updated, but
> it doesn't appear WPT is set up to deal with this right now (by their
> nature, "attachment" content doesn't load in-browser) as it'd need to
> handle downloads and provide some way of asserting things about them.
> This will likely be tackled after / as part of spec changes.
>
> ~ Gijs
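
For concreteness, the kind of header this change is about (the filename
here is illustrative):

    Content-Disposition: attachment; filename=2020 annual report.pdf

Roughly: Firefox previously stopped parsing the unquoted value at the
first whitespace (yielding a filename of "2020"), where other browsers
accept the rest of the value; after this change Firefox accepts the
spaces and tabs as well.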


Re: Proposed W3C Charter: Decentralized Identifier (DID) Working Group

2019-08-29 Thread Eric Rescorla
I tend to agree with you. Is there a reason not to formally object?

On Wed, Aug 28, 2019 at 3:30 PM L. David Baron  wrote:

> The W3C is proposing a new charter for:
>
>   Decentralized Identifier (DID) Working Group
>   https://www.w3.org/2019/08/did-wg-charter.html
>   https://lists.w3.org/Archives/Public/public-new-work/2019Aug/.html
>
> Mozilla has the opportunity to send comments or objections through
> this Saturday, August 31.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
>
> I'm pretty concerned that this group isn't a good use of W3C's
> resources, and that the promoters of this work and related areas of
> work have convinced various parties (e.g., government agencies like
> [1]) that this work is valuable, partly through the use of the W3C's
> reputation to promote this work.
>
> (I also feel like, while it's called decentralized, in practice it
> seems to require more centralization than the Web, which allows
> anyone to register a domain and then mint URLs.  I'm also skeptical
> of the privacy claims made in the group's charter.)
>
> That said, I think it's probably going to happen anyway no matter
> what we say, so I'm not sure what, if anything, to say in the
> review.  I'd probably be inclined to explicitly abstain from the
> review and add brief comments to that abstention.
>
> -David
>
> [1] https://lists.w3.org/Archives/Public/public-new-work/2019Aug/0013.html
>
> --
> 𝄞   L. David Baron                         http://dbaron.org/   𝄂
> 𝄢   Mozilla                          https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)


Re: [C++] Intent to eliminate: `using namespace std; ` at global scope

2019-08-29 Thread Eric Rescorla
+1. This::sounds::like::a::great::change.

-Ekr


On Thu, Aug 29, 2019 at 12:13 PM Nathan Froyd  wrote:

> Hi all,
>
> In working on upgrading our C++ support to C++17 [1], we've run into
> some issues [2] surrounding the newly-introduced `std::byte` [3],
> various Microsoft headers that pull in definitions of `byte`, and
> conflicts between the two when one has done `using namespace std;`,
> particularly at global scope.  Any use of `using namespace $NAME` is
> not permitted by the style guide [4].
>
> A quick perusal of our code shows that we have, uh, "many" violations
> of the prohibition of `using namespace $NAME` [5].  I do not intend to
> boil the ocean and completely rewrite our codebase to eliminate all
> such violations.  However, since the use of `using namespace std;` is
> relatively less common (~100 files) and is blocking useful work,
> eliminating that pattern seems like a reasonable thing to do.
>
> Thus far, it appears that the problematic `using namespace std;`
> instances all appear at global scope.  We have a handful of
> function-scoped ones that do not appear to be causing problems; if
> those are easy to remove in passing, we'll go ahead and remove
> function-scoped ones as well.  The intent is to not apply this change
> to third-party code unless absolutely necessary; we have various ways
> of dealing with the aforementioned issues--if they even come up--in
> third-party code.
>
> The work is being tracked in [2].  Please do not add new instances of
> `using namespace std;` at global scope, or approve new instances in
> patches that you review; when this work is complete, we will ideally
> have a lint that checks for this sort of thing automatically.  If you
> would like to help with this project, please file blocking bugs
> against [2].
>
> Thanks,
> -Nathan
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1560664
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1577319
> [3] http://eel.is/c++draft/cstddef.syn#lib:byte
> [4] https://google.github.io/styleguide/cppguide.html#Namespaces
> [5]
> https://searchfox.org/mozilla-central/search?q=using+namespace+.%2B%3B=false=true=
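
To make the conflict concrete, a minimal sketch of the ambiguity described
above (the typedef stands in for what the Microsoft headers pull in; it is
illustrative, not the real header content):

#include <cstddef>  // C++17: defines std::byte in namespace std

// Stand-in for what some Microsoft headers do (e.g. rpcndr.h):
typedef unsigned char byte;

using namespace std;  // at global scope -- the pattern being eliminated

// byte b{42};  // error: reference to 'byte' is ambiguous
//              // (could be ::byte or std::byte)

// Qualifying explicitly (or dropping the using-directive) resolves it:
std::byte ok{42};

int main() { return static_cast<int>(ok); }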


Re: Extension to integrate Searchfox features in Phabricator

2018-10-04 Thread Eric Rescorla
This is great. Thanks!

-Ekr


On Tue, Oct 2, 2018 at 7:04 AM Kartikaya Gupta  wrote:

> Neat! I took a quick peek at your addon source and the stuff it's
> relying on seems ok for now. At least, it's not anything we plan to
> change anytime soon. If there are changes to searchfox that would make
> your life easier in terms of expanding your addon, let me know and we
> can discuss possibilities.
>
> Cheers,
> kats
> On Tue, Oct 2, 2018 at 9:03 AM Marco Castelluccio
>  wrote:
> >
> > I've built an (experimental) WebExtension to integrate some of the
> > Searchfox features into Phabricator.
> > I find it very useful to be able to search code while reviewing, but I
> > have to resort to opening a new
> > Searchfox tab and looking for the code that is being modified. This
> > extension makes my workflow much
> > more pleasant.
> >
> > You can find it at
> > https://addons.mozilla.org/addon/searchfox-phabricator/ (the source code
> > is at
> > https://github.com/marco-c/mozsearch-phabricator-addon).
> >
> > Currently, the extension does:
> > 1) Highlight keywords when you hover them, highlighting them both in the
> > pre-patch and in the post-patch view;
> > 2) When you press on a keyword, it offers options to search for the
> > definition, callers, and so on (the results are opened on Searchfox in a
> > new tab).
> >
> > I'm planning to add support for sticky highlighting and blame
> > information (when hovering on the line number on the left side).
> >
> > - Marco.
> >
> > P.S.: The extension relies on parsing pages from Searchfox and from
> > Phabricator, so the maintenance might not be so easy. If there's enough
> > interest, I might keep maintaining it or talk with someone to make
> > Searchfox data accessible in an easier way.


Re: PSA: Automated code analysis now also in Phabricator

2018-07-17 Thread Eric Rescorla
This is amazing and looks super-useful. Really looking forward to seeing
what else we can add in this area!

-Ekr


On Tue, Jul 17, 2018 at 6:22 AM, Jan Keromnes  wrote:

> TL;DR -- “reviewbot” is now enabled in Phabricator. It reports potential
> defects in pending patches for Firefox.
>
> Last year, we announced Code Review Bot (“reviewbot”, née “clangbot”), a
> Taskcluster bot that analyzes every patch submitted to MozReview, in order
> to automatically detect and report code defects *before* they land in
> Nightly:
>
> https://groups.google.com/d/msg/mozilla.dev.platform/TFfjCRdGz_E/8leqTqvBCAAJ
>
> Developer feedback has been very positive, and the bot has caught many
> defects, thus improving the quality of Firefox.
>
> Today, we’re happy to announce that reviewbot analyzes every patch
> submitted to Phabricator as well.
>
> Here is an example of an automated review in Phabricator:
> https://phabricator.services.mozilla.com/D2120
>
> Additionally, we’ve made a number of improvements to this bot over the past
> months. Notably:
> - Enabled several clang-tidy checks to catch more C/C++ defects (e.g.
> performance and security issues)
> - Integrated Mozlint in order to catch JS/Python/wpt defects as well
> - Fixed several bugs, like the lowercase code snippets in comments
> - We’re now detecting up to 5x more defects in some patches
>
> Please report any bugs with the bot here: https://bit.ly/2tb8Qk3
>
> As for next steps, we’re currently discussing a few ideas for the project’s
> future, including:
> - Catch more bugs by comparing defects before & after a patch is applied
> (currently, we report defects located on lines that are directly modified
> by the patch)
> - Evaluate which defect types are generally being fixed or ignored
> - Evaluate analyzing Rust code with rust-clippy
> - Help with coding style by leveraging clang-format
> - Integrate more deeply with Phabricator, e.g. by reporting a build status
> for our analysis
> - Integrate our analysis with Try, in order to unify our various CI and
> code analysis tools
>
> Many thanks to everyone who helped make this a reality: Bastien, who did
> most of the implementation and bug fixing, Marco, Andi, Calixte, Sylvestre,
> Ahal, the Release Management Analysis team and the Engineering Workflow
> team.
>
> Jan


Re: open socket and read file inside Webrtc

2018-07-04 Thread Eric Rescorla
On Wed, Jul 4, 2018 at 5:24 AM,  wrote:

> Hi,
> I'm very new with firefox (as developer, of course).
> I need to open a file and tcp sockets inside webrtc.
> I read the following link
> https://wiki.mozilla.org/Security/Sandbox#File_System_Restrictions
> there is the sandbox that does not permit to open sockets or file
> descriptors.
> could you give me the way how I can solve these my problems?
> Thank you very much
>

There's no way to open raw sockets inside the web platform.

-Ekr


> Angelo


Re: Rust crate approval

2018-07-01 Thread Eric Rescorla
On Sun, Jul 1, 2018 at 4:56 PM, Xidorn Quan  wrote:

> On Mon, Jul 2, 2018, at 9:03 AM, Eric Rescorla wrote:
> > On Sat, Jun 30, 2018 at 9:35 AM, Lars Bergstrom 
> > wrote:
> >
> > > On Fri, Jun 29, 2018 at 8:33 AM, Tom Ritter  wrote:
> > >
> > > >
> > > > I know that enumerating badness is never a comprehensive solution;
> but
> > > > maybe there could be a wiki page we could point people to for things
> that
> > > > indicate something is doing something scary in Rust?  This might let
> us
> > > > crowd-source these reviews in a safer manner. For example, what
> would I
> > > > look for in a crate to see if it was:
> > > >  - Adjusting memory permissions
> > > >  - Reading/writing to disk
> > > >  - Performing unsafe C/C++ pointer stuff
> > > >  - Performing network connections of any type
> > > >  - Calling out to syscalls or other kernel functions (especially
> > > >    win32k.sys functions on Windows)
> > > >  - (whatever else you can think of...)
> > > >
> > >
> > > Building on that, is there a list of crates that should *never* be
> > > included in Firefox that you could scan for? Such as, anything that
> > > is not nss (openssl bindings) or necko (use of a different network
> > > stack that might not respect proxies, threading concerns, etc.)?
> >
> > Is this a crate-specific issue? Suppose that someone decided to land
> > a new C++ networking stack, that would presumably also be bad but
> > should be caught in code review, no?
>
> The point is that it is too easy to add a new crate dependency
> accidentally, and it is very possible for reviewers to overlook that. So
> it may make sense to introduce a blacklist-ish thing to keep that from
> happening.
>

But in order for it to have an effect other than just cluttering up the
build, it would have to be wired into Firefox. And if reviewers don't
notice that, we have bigger problems.

-Ekr

> - Xidorn


Re: Rust crate approval

2018-07-01 Thread Eric Rescorla
On Sat, Jun 30, 2018 at 9:35 AM, Lars Bergstrom 
wrote:

> ​
>
> On Fri, Jun 29, 2018 at 8:33 AM, Tom Ritter  wrote:
>
> >
> > I know that enumerating badness is never a comprehensive solution; but
> > maybe there could be a wiki page we could point people to for things that
> > indicate something is doing something scary in Rust?  This might let us
> > crowd-source these reviews in a safer manner. For example, what would I
> > look for in a crate to see if it was:
> >  - Adjusting memory permissions
> >  - Reading/writing to disk
> >  - Performing unsafe C/C++ pointer stuff
> >  - Performing network connections of any type
> >  - Calling out to syscalls or other kernel functions (especially
> >    win32k.sys functions on Windows)
> >  - (whatever else you can think of...)
> > 
> >
>
> Building on that, is there a list of crates that should *never* be
> included in Firefox that you could scan for? Such as, anything that is not
> nss (openssl bindings) or necko (use of a different network stack that
> might not respect proxies, threading concerns, etc.)?


Is this a crate-specific issue? Suppose that someone decided to land
a new C++ networking stack, that would presumably also be bad but
should be caught in code review, no?

-Ekr



Re: Proposed W3C Charter: Devices and Sensors Working Group

2018-05-25 Thread Eric Rescorla
LGTM

-Ekr


On Fri, May 25, 2018 at 5:23 PM, L. David Baron  wrote:

> OK, sorry to not get this drafted until too close to the deadline to
> be likely to get feedback, but here's what I currently have for
> proposed comments that I'll submit on the charter.  (If you happen
> to be able to get feedback to me in the next 3 hours, great...
> otherwise feedback may still be useful for later followup
> discussions.)
>
> -David
>
> --
> We'd like to see this charter change a bit.  That desire for change
> comes out of our concern about the privacy implications of many of these
> APIs.  Researchers have demonstrated that a number of these APIs can be
> used for tracking users or for learning various other types of
> information about what users are doing, and some of these have actually
> been used for tracking web users.  See, for example, this pair of
> articles about use of the battery status API for tracking:
> https://www.theguardian.com/technology/2016/aug/02/battery-status-indicators-tracking-online
> https://www.theguardian.com/technology/2016/nov/01/firefox-disable-battery-status-api-tracking
>
> The APIs proposed in this charter have varying amounts of privacy risk.
> It is likely that some of them can be structured to provide a reasonable
> amount of information with meaningful user consent, but some of them
> cannot.  Therefore we'd like it to be more explicit in the charter that
> concluding that a specification cannot be done in an appropriate way is
> a possible success condition of the working group.  (The charter
> currently mentions that "APIs that cannot be demonstrated to be
> implementable securely within the default browser context will not be
> released.", but this doesn't explicitly mention privacy and it doesn't
> explicitly say that abandoning work is a desirable outcome if that's the
> appropriate choice.)
>
> The APIs that we're most concerned about in this regard are:
>   Battery Status API
> See articles cited above; we previously unshipped support for this.
>   Network Information API
> (I should have more details here, but don't.)
>   DeviceOrientation Event specification
> (I should have more details here, but don't.)
>   Proximity Sensor
>   Ambient Light Sensor
>   Accelerometer
>   Gyroscope
>   Magnetometer
>   Orientation Sensor
> These sensors have the problem that web access at a high enough
> resolution to be useful for many of the use cases allows sites using
> the API to learn various sorts of information about the user that
> are hard to explain in a way to get good informed consent, such as
> where the user is, what sort of activities they're doing, what
> they're typing, what activities are happening nearby, etc.
>
> See, for example:
> https://blog.lukaszolejnik.com/privacy-of-ambient-light-sensors/
> https://blog.lukaszolejnik.com/stealing-sensitive-browser-data-with-the-w3c-ambient-light-sensor-api/
> https://blog.lukaszolejnik.com/privacy-analysis-of-w3c-proximity-sensor/
> https://www.bleepingcomputer.com/news/software/firefox-gets-privacy-boost-by-disabling-proximity-and-ambient-light-sensor-apis/
> https://dieidee.eu/2015/10/14/accelerometer-and-gyroscope-sensor-data-can-create-a-serious-privacy-breach/
>
>
>
> While it's useful to have a forum that is appropriate for discussion of
> how to address these privacy issues, I don't think there is currently
> consensus that it is appropriate to make these APIs part of the web
> platform.  Normally I think that would suggest that the documents aren't
> ready to be put on the Recommendation track; in this case things are
> more awkward because many of them are already on the Recommendation
> track.  (That said, it's not entirely clear to me whether AC review of a
> charter is expected to represent consensus that a specification is
> appropriate for the Web.)
>
> Given that, it would be preferable to move some of these documents off
> of the Recommendation track back to earlier stages of incubation until
> there's a clearer path for addressing the privacy concerns with them.
> If that isn't possible, the working group should at least be given the
> explicit possible success criterion of choosing that particular
> specifications are not viable, and should be tasked with building
> broad consensus outside of the working group that the proposed APIs are
> suitable for the web.
> --
>
> On Friday 2018-05-04 11:40 -0700, Kyle Machulis wrote:
> > Filling in some of the gaps here:
> >
> > WakeLock API is still used on android, afaik so we don't let the phone
> > turn off while playing media.
> >
> > We ship the Vibration API on android, have since FxOS. Not much happening
> > there these days, other than some discussion about permissions around it.
> >
> > We still have battery code around Firefox, but the API isn't exposed to
> > content. I think we're still trying to figure out whether to just
> > completely remove 

Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-12 Thread Eric Rescorla
On Fri, May 11, 2018 at 4:06 PM, Gregory Szorc  wrote:

> On Wed, May 9, 2018 at 11:01 AM, Ted Mielczarek 
> wrote:
>
> > On Wed, May 9, 2018, at 1:11 PM, L. David Baron wrote:
> > > > mozregression won't be able to bisect into inbound branches then,
> > > > but I believe we've always been expiring build artifacts created
> > > > from integration branches after a few months in any case.
> > > >
> > > > My impression was that people use mozregression primarily for
> > > > tracking down relatively recent regressions. Please correct me if
> > > > I'm wrong.
> > >
> > > It's useful for tracking down regressions no matter how old the
> > > regression is; I pretty regularly see mozregression finding useful
> > > data on bugs that regressed multiple years ago.
> >
> > To be clear here--we still have an archive of nightly builds dating back
> > to 2004, so you should be able to bisect to a single day using that. We
> > haven't ever had a great policy for retaining individual CI builds like
> > these tinderbox-builds. They're definitely useful, and storage is not
> that
> > expensive, but given the number of build configurations we produce
> nowadays
> > and the volume of changes being pushed we can't archive everything
> forever.
>
>
> It's worth noting that once builds are deterministic, a build system is
> effectively a highly advanced caching mechanism. It follows that cache
> eviction is therefore a tolerable problem: if the entry isn't in the cache,
> you just build again! Artifact retention and expiration boils down to a
> trade-off between the cost of storage and the convenience of accessing
> something immediately (as opposed to waiting several dozen minutes to
> populate the cache).
>
> The good news is that Linux Firefox builds have been effectively
> deterministic (modulo PGO and some minor build details like the build time)
> for several months now (thanks, glandium!). And moving to Clang on all
> platforms will make it easier to achieve deterministic builds on other
> platforms. The bad news is we still have many areas of CI that are not
> hermetic and attempts to retrigger Firefox build tasks in the future have a
> very high possibility of failing for numerous reasons (e.g. some dependent
> task of the build hits a 3rd party server that is no longer available or
> has deleted a file). In other words, our CI results may not be reproducible
> in the future. So if we delete an artifact, even though the build is
> deterministic, we may not have all the inputs to reconstruct that result.
>
> Making CI hermetic and reproducible far in the future is a hard problem.
> There are esoteric failure scenarios like "what if we need to fetch content
> from a server in 2030 but TLS 1.2 has been disabled due to a critical
> vulnerability and code in the hermetic build task doesn't support TLS 1.3."
> In order to realistically achieve reproducible builds in the future, we
> need to store *all* inputs somewhere reliable where they will always be
> available. Version control is one possibility. A content-indexed service
> like tooltool is another. (At Google, they check in the source code for
> Clang, glibc, binutils, Linux, etc into version control so all they need is
> a version revision and a bootstrap compiler (which I also suspect they
> check into the monorepo) to rebuild the world from source.)
>
> What I'm trying to say is we're making strides towards making builds
> deterministic and reproducible far in the future. So hopefully in a few
> years we won't need to be concerned about deleting old data because our
> answer will be "we can easily reproduce it at any time."
>

This might end up being true, but it seems a bit optimistic to me. I've
worked with lots of systems much simpler than our builds that were in
theory reproducible, but then found when I went back to reproduce the
results that things weren't so simple. You allude to one case above: it's
one thing to have reproducible builds from days ago and quite another from
years ago.

Given the incredibly low cost of storage (the street price of Glacier is
$.004/GB/month [0], so even a terabyte of old builds runs about $4/month)
I'd be pretty hesitant to delete data which we thought we might want to use
again just because we figured we'd reproduce it.

-Ekr

[0] https://aws.amazon.com/glacier/



Re: Is super-review still a thing?

2018-04-20 Thread Eric Rescorla
On Fri, Apr 20, 2018 at 7:03 PM, Dave Townsend 
wrote:

> Presumably it supports multiple reviews for a patch, in which case I think
> we're fine.
>

It does.

-Ekr


> On Fri, Apr 20, 2018 at 3:03 PM Gregory Szorc  wrote:
>
> > On Fri, Apr 20, 2018 at 2:51 PM, L. David Baron 
> wrote:
> >
> > > On Friday 2018-04-20 14:23 -0700, Kris Maglione wrote:
> > > > For a lot of these patches, my opinion is only really critical for
> > > > certain architectural aspects, or implementation aspects at a few
> > > > critical points. There are other reviewers who are perfectly
> > > > qualified to do a more detailed review of the specifics of the
> > > > patch, and have more spare cycles to devote to it. Essentially,
> > > > what's needed from me in these cases is a super-review, which I can
> > > > do fairly easily, but instead I become a bottleneck for the code
> > > > review as well.
> > > >
> > > > So, for the areas where I have this responsibility, I'd like to
> > > > institute a policy that certain types of changes need a final
> > > > super-review from me, but should get a detailed code review from
> > > > another qualified reviewer when that makes sense.
> > >
> > > I think it's reasonable to use the super-review flag for this sort
> > > of high-level or design review, at least until we come up with a
> > > better name for it (and make a new flag, and retire the old one).  I
> > > don't think the super-review policy (as written) is meaningful
> > > today.
> >
> >
> > FWIW I'm pretty sure Phabricator won't support the super-review flag. And
> > since we're aiming to transition all reviews to Phabricator...


Re: Commit messages in Phabricator

2018-02-12 Thread Eric Rescorla
On Mon, Feb 12, 2018 at 6:09 AM, Boris Zbarsky  wrote:

> On 2/11/18 3:57 PM, Emilio Cobos Álvarez wrote:
>
>> Arc wants to use something like:
>>
>
> So from my point of view, having the bug# easily linked from various
> places where the short summary is all that's shown (pushlogs especially) is
> pretty useful.  It saves loading a bunch of extra things when trying to go
> from regression-range pushlogs to the relevant bugs


Yes, this does seem convenient. I imagine we could develop tooling to deal
with other locations, but probably better not to.

Instead, maybe we can arrange for Phab/Lando to put the bug # in the short
message, potentially also with r=

-Ekr
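
For instance (the bug number and reviewer here are hypothetical), the
short message might then read:

    Bug 1234567 - Fix frobnication of the widget cache. r=reviewer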


>
> -Boris
>


Re: Intent to Ship - Support already-enrolled U2F devices with Google Accounts for Web Authentication

2018-01-30 Thread Eric Rescorla
On Tue, Jan 30, 2018 at 8:49 AM, J.C. Jones  wrote:

> Summary: Support already-enrolled U2F devices with Google Accounts for Web
> Authentication
>
> Web Authentication is on-track to ship in Firefox 60 [1], and contains
> within it support for already-deployed USB-connected FIDO U2F devices, and
> we intend to ship with a spec extension feature implemented to support
> devices that were already-enrolled using the older U2F Javascript API [2].
> That feature depends on Firefox supporting the older API’s algorithm for
> relaxing the same-origin policy [3] which is not completely implemented in
> Firefox [4].
>
> It appears that many U2F JS API-compatible websites do not require the
> cross-origin features currently unimplemented in Firefox, but notably the
> Google Accounts service does: For historical reasons (being the first U2F
> implementor) their FIDO App ID is “www.gstatic.com” [5] for logins to “
> google.com” and its subdomains [6]. Interestingly, as the links to
> Chromium’s source code in [5] and [6] show, Chrome chooses to hardcode the
> approval of this same-origin override rather than complete the
> specification’s algorithm for this domain.
>
> As mentioned in the bug linked in [4], I have a variety of reservations
> with the U2F Javascript API’s algorithm. I also recognize that Google
> Accounts is the largest player in existing U2F device enrollments. The
> purpose of the extension feature in [2] is to permit users who already are
> using U2F devices to be able to move seamlessly to Web Authentication --
> and hopefully also be able to use browsers other than Chrome to do it.
>
> After discussions with appropriate Googlers confirmed that the “
> www.gstatic.com” origin used in U2F is being retired as part of their
> change-over to Web Authentication, I propose to hard-code support in Gecko
> to permit Google Accounts’ cross-origin U2F behavior, the same way as
> Chrome has. I propose to do this for a period of 5 years, until 2023, and
>

Five years seems very long to keep this around. 1-2 seems a lot more
appropriate. When is the gstatic migration going to be complete?

-Ekr


> to file a bug to remove this code around that date. That would give even
> periodically-used U2F-protected Google accounts ample opportunity to
> re-enroll their U2F tokens with the new Web Authentication standard and
> provide continuity-of-access. The code involved would be a small search
> loop, similar to Chrome’s in [6].
>
> If we choose not to do this, Google Accounts users who currently have U2F
> enabled will not be able to authenticate using Firefox until their existing
> U2F tokens are re-enrolled using Web Authentication -- meaning not only
> will Google need to change to the Web Authentication API, they will also
> have to prompt users to go back through the enrollment ceremony. This
> process is likely to take several years.
>
> Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=webauthn
>
> Spec: https://www.w3.org/TR/webauthn/
>
> Estimated target release: 60
>
> Preference behind which this is implemented:
> security.webauth.webauthn
>
> DevTools support:
> N/A
>
> Support by other browser engines:
> - Blink: In-progress
> - Edge: In-progress
> - Webkit: No public announcements
>
> Testing:
> Mochitests in-tree; https://webauthn.io/; https://webauthn.bin.coffee/;
> https://webauthndemo.appspot.com/; Web Platform Tests in-progress
>
>
> Cheers,
> J.C. Jones and Tim Taubert
>
> [1]
> https://groups.google.com/d/msg/mozilla.dev.platform/
> tsevyqfBHLE/lccldWNNBwAJ
>
> [2] https://w3c.github.io/webauthn/#sctn-appid-extension and
> https://bugzilla.mozilla.org/show_bug.cgi?id=1406471
>
> [3]
> https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fido-appid-and-
> facets-v1.2-ps-20170411.html
>
> [4]
> https://groups.google.com/d/msg/mozilla.dev.platform/
> UW6WMmoDzEU/8h7DFOfsBQAJ
> and https://bugzilla.mozilla.org/show_bug.cgi?id=1244959
>
> [5]
> https://chromium.googlesource.com/chromium/src.git/+/master/
> chrome/browser/extensions/api/cryptotoken_private/
> cryptotoken_private_api.cc#30
>
> [6]
> https://chromium.googlesource.com/chromium/src.git/+/master/
> chrome/browser/extensions/api/cryptotoken_private/
> cryptotoken_private_api.cc#161
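
As a rough sketch of the "small search loop" described above (hypothetical
code, not the actual Gecko patch; the AppID strings are illustrative,
based on the www.gstatic.com AppIDs publicly associated with Google's U2F
deployment):

#include <algorithm>
#include <array>
#include <string>

// Hardcoded exception list mirroring Chrome's approach: these legacy
// U2F AppIDs are permitted despite the incomplete same-origin algorithm.
static bool IsGoogleLegacyAppId(const std::string& aAppId) {
  static const std::array<const char*, 2> kAllowed = {
      "https://www.gstatic.com/securitykey/origins.json",
      "https://www.gstatic.com/securitykey/a/google.com/origins.json"};
  return std::any_of(kAllowed.begin(), kAllowed.end(),
                     [&](const char* allowed) { return aAppId == allowed; });
}

int main() {
  // A caller would consult this before applying the usual AppID checks:
  return IsGoogleLegacyAppId(
             "https://www.gstatic.com/securitykey/origins.json") ? 0 : 1;
}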


Re: Password autofilling

2018-01-09 Thread Eric Rescorla
On Tue, Jan 9, 2018 at 8:43 AM, Gervase Markham  wrote:

> On 01/01/18 20:08, Jonathan Kingston wrote:
> > A recent research post[1] have highlighted the need for Firefox to
> disable
> > autofilling of credentials. The research post suggests web trackers are
> > using autofilling to track users around the web.
>
> Autofill is restricted to same-domain (roughly) so how can they track
> users "around the web"?



The third party JS is loaded into the page's context:

"Thus, third-party javascript can retrieve the saved credentials by
creating a form with the username and password fields, which will then be
autofilled by the login manager."



Other than not being cleared when cookies are cleared, how is this
> technique more powerful than a cookie containing one's email address?
>

Being unclearable is certainly more powerful, but it also allows
cross-correlation between different tracking domains because the
identifiers are stable.

-Ekr


> Autofill is an extremely, extremely convenient browser function, and the
> fact that Firefox's current implementation doesn't always do the right
> thing (e.g. offering me 3 choices of username and, when I pick one, 3
> choices of password rather than autofilling the one which matches the
> username, ) is a source of regular frustration. Let's not break
> the usability more.
>
> Gerv


Re: Proposed W3C Charter: Second Screen Working Group

2018-01-05 Thread Eric Rescorla
LGTM!

On Thu, Jan 4, 2018 at 9:56 PM, L. David Baron  wrote:

> So I think Martin, Peter, and I share similar concerns here, and I'm
> inclined to turn those concerns into an objection to this charter.
>
> So how does this sound for proposed comments on the charter
> (submitted as a formal objection)?  Note that I've tried to turn the
> comments into a specific suggestion for a remedy, but I'm far from
> sure if that suggestion is the right one.
>
> I've avoided mentioning the comment about "further changes" in the
> specs that the existing working group has in CR, to avoid
> distracting from what I think is the main piece.  But let me know if
> you see a good way to work it in.
>
> But I'd be particularly interested to hear if SC thinks this might
> be harmful rather than helpful to the end goal for some reason, or
> if he has other disagreements with this approach, or better
> suggestions for what remedy we should suggest.
>
> -David
>
> =
>
> The current situation with the API developed by this Working Group
> is that it is an API for a web page to interact with a connection
> between the web browser and a separate screen that exists entirely
> in a closed ecosystem.  For example, a browser made by Google might
> connect to displays that support the proprietary Chromecast
> protocol, whereas one made by Apple might connect to displays that
> support the proprietary AirPlay protocol.
>
> We know that parts of an Open Screen Protocol are in an early stage
> of development at https://github.com/webscreens/openscreenprotocol
> (as linked from the charter), and the goal of this work is to
> improve on this situation.  We hope it will allow for interoperable
> discovery of, identification of, and communication with presentation
> displays.  However, we're deeply concerned about chartering a second
> iteration of the work that continues building the Presentation API
> on top of a closed ecosystem, when the work to make the ecosystem
> more open has a lower priority.  While we understand that the work
> on building an open ecosystem still requires incubation, we believe
> it should have the highest priority in this space.  We believe that
> rechartering the Second Screen WG should wait until that work is
> ready to be in a working group, and that advancing the current
> specifications (developed under the existing charter) to Proposed
> Recommendation probably depends on this new work in order to
> demonstrate real interoperability, although we are open to other
> paths toward fixing this situation.
>
>
> On Thursday 2018-01-04 09:29 -0700, Peter Saint-Andre wrote:
> > +1 to Martin's feedback.
> >
> > On 1/3/18 10:19 PM, Martin Thomson wrote:
> > > Without the protocol pieces, this remains vendor-specific.  We should
> > > comment on this and make it clear that we think that definition of a
> > > generic protocol for interacting with the second display has not been
> > > given sufficient priority.  Allowing this to proceed without a generic
> > > protocol would be bad for the ecosystem.
> > >
> > > From what I can see, there seem to be a bunch of options that are
> > > described for the protocol, with extremely scant detail.  Certainly
> > > not enough to implement anything.
> > >
> > > I'm concerned with the statement "This Working Group does not
> > > anticipate further changes to this specification" regarding the
> > > presentation API.  I haven't reviewed this thoroughly, but there
> > > appear to be some gaps in rather fundamental pieces.  For instance -
> > > and maybe this doesn't change the API at all - but the means of
> > > identification for screens is unclear.  Some of these details are
> > > important, such as whether knowledge of a presentation URL is all the
> > > information necessary to use that URL (i.e., are they capability
> > > URLs?).
> > >
> > > On Thu, Jan 4, 2018 at 2:31 PM, Shih-Chiang Chien 
> wrote:
> > >> The SecondScreen WG intended to move the protocol development to
> > >> CG, and will possibly move to IETF after the incubation phase.
> > >> The revised charter is trying to associate the work of CG to the
> > >> timeline of Presentation API development.
> > >>
> > >> In the meantime, the WG will tackle the testability issue found
> > >> while creating test cases and cultivating Level 2 API requirements
> > >> for advanced use cases.
> > >>
> > >> I'll vote to support this revised charter.
> > >>
> > >> Best Regards,
> > >> Shih-Chiang Chien
> > >> Mozilla Taiwan
> > >>
> > >> On Thu, Jan 4, 2018 at 10:08 AM, L. David Baron 
> wrote:
> > >>
> > >>> The W3C is proposing a revised charter for:
> > >>>
> > >>>   Second Screen Working Group
> > >>>   https://w3c.github.io/secondscreen-charter/
> > >>>   https://lists.w3.org/Archives/Public/public-new-work/2017Dec/.html
> > >>>
> > >>> Mozilla has the opportunity to send comments or objections through
> > >>> Friday, January 5.  (Sorry for failing to send this out 

Re: Hiding 'new' statements - Good or Evil?

2017-11-28 Thread Eric Rescorla
On Mon, Nov 27, 2017 at 6:41 PM, Xidorn Quan <m...@upsuper.org> wrote:

> On Tue, Nov 28, 2017, at 11:45 AM, Eric Rescorla wrote:
> > On Mon, Nov 27, 2017 at 4:07 PM, smaug <sm...@welho.com> wrote:
> > > And auto makes code reading harder. It hides important information like
> > > lifetime management.
> > > It happens easily with auto that one doesn't even start to think
> > > whether nsCOMPtr/RefPtr should be used there.
> >
> > This statement seems perhaps a touch too universalist; it may be the case
> > that it makes it harder for *you* to read, but I find it makes it easier
> > for me to read. I also don't think it's a coincidence that it's a common
> > feature of modern languages (Rust and Go, to name just two), so I suspect
> > I'm not alone in thinking auto is a good thing.
>
> Using Rust and Go as examples isn't very fair.
>
> Go has GC, and Rust has compile-time lifetime checking, so they are
> basically free from certain kind of lifetime issues.


I don't think that it's unfair. My point was that these are features that
language designers and users think are desirable, which is why they get
added to languages. After all, you could certainly build a language like
Rust or Go that didn't have any type inference; people would just wonder
what you were thinking. And I strongly suspect they got added to C++ for
the same reason.

With that said, this kind of lifetime issue in C++ largely arises from the
use of raw pointers (or auto_ptr, I guess, though that's been deprecated),
and in the specific cases under discussion here, by intermixing old-style
memory management with a bunch of the newer features. We could of course
deal with that by not letting anybody use the new features, but it seems
like a more promising direction would be to move to a more modern memory
management idiom, which C++-14 already supports.

-Ekr
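
To make that last point concrete, a minimal sketch (plain C++14, nothing
Mozilla-specific) of how the newer idiom answers the lifetime objection:
when factories return smart pointers, auto no longer hides ownership,
because the return type carries it:

#include <memory>

struct Widget {
  int value = 42;
};

// Ownership is encoded in the factory's return type, not at the call site:
std::unique_ptr<Widget> MakeWidget() { return std::make_unique<Widget>(); }

int main() {
  auto widget = MakeWidget();  // auto is fine here: widget is a
                               // unique_ptr, and deletion is automatic
  return widget->value;
}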


Re: Hiding 'new' statements - Good or Evil?

2017-11-27 Thread Eric Rescorla
On Mon, Nov 27, 2017 at 4:07 PM, smaug  wrote:

> On 11/28/2017 12:53 AM, Jeff Gilbert wrote:
>
>> ranged-for issues are the same as those for doing manual iteration,
>>
> It is not, in case you iterate using
> for (i = 0; i < foo.length(); ++i)
> And that is the case which has been often converted to ranged-for, with
> bad results.
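
(A hedged illustration of the kind of bad result this refers to; the
container and values are made up. A ranged-for caches begin()/end()
iterators, so mutating the container mid-loop can invalidate them, while
the index form at least re-reads the bound on every pass:)

#include <cstddef>
#include <vector>

int main() {
  std::vector<int> values{1, 2, 3};

  // Index form: values.size() and values[i] are re-evaluated every pass,
  // so appending inside the loop stays well-defined (if subtle):
  for (std::size_t i = 0; i < values.size(); ++i) {
    if (values[i] == 2) values.push_back(4);
  }

  // Ranged-for holds iterators taken before the loop started; push_back
  // may reallocate and invalidate them, which is undefined behavior:
  // for (int v : values) {
  //   if (v == 2) values.push_back(5);  // UB after reallocation
  // }
  return 0;
}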
>
> And auto makes code reading harder. It hides important information like
> lifetime management.
> It happens easily with auto that one doesn't even start to think whether
> nsCOMPtr/RefPtr should be used there.
>

This statement seems perhaps a touch too universalist; it may be the case
that it makes it harder for *you* to read, but I find it makes it easier
for me to read. I also don't think it's a coincidence that it's a common
feature of modern languages (Rust and Go, to name just two), so I suspect
I'm not alone in thinking auto is a good thing.

As for the lifetime question, can you elaborate on the scenario you are
concerned about? Is it something like:

T *foo() {
   return new T();
}

void bar() {
   auto t = foo();
}

Where you think that t should be assigned to a smart pointer? (Obviously,
there are ways for this to cause UAF as well, though this one is a leak).

-Ekr


Re: Hiding 'new' statements - Good or Evil?

2017-11-24 Thread Eric Rescorla
On Thu, Nov 23, 2017 at 4:00 PM, smaug  wrote:

> On 11/23/2017 11:54 PM, Botond Ballo wrote:
>
>> I think it makes sense to hide a 'new' call in a Make* function when
>> you're writing an abstraction that handles allocation *and*
>> deallocation.
>>
>> So MakeUnique makes sense, because UniquePtr takes ownership of the
>> allocated object, and will deallocate it in the destructor. MakeRefPtr
>> would make sense for the same reason.
>>
> I almost agree with this, but, all these Make* variants hide the
> information that they are allocating,
> and since allocation is slow, it is usually better to know when allocation
> happens.
> I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear about
> the functionality.
>

This seems like a reasonable argument in isolation, but I think it's more
important to mirror the standard C++ mechanisms and C++-14 already defines
std::make_unique.

As a post-script, given that we now can use C++-14, can we globally replace
the MFBT clones of C++-14 mechanisms with the standard ones?

-Ekr
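
As a reference point for the parallel drawn above, a small sketch (the
mozilla::MakeUnique spelling, from MFBT's mozilla/UniquePtr.h, appears
only in a comment):

#include <memory>

struct Point {
  Point(int aX, int aY) : x(aX), y(aY) {}
  int x, y;
};

int main() {
  // C++14 standard spelling; the Make* name still flags the allocation:
  auto p = std::make_unique<Point>(3, 4);

  // The MFBT equivalent it could replace (not compiled here):
  //   mozilla::UniquePtr<Point> p = mozilla::MakeUnique<Point>(3, 4);

  return p->x + p->y;
}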


>
>
> -Olli
>
>
>
>> But in cases where a library facility is not taking ownership of the
>> object, and thus the user will need to write an explicit 'delete', it
>> makes sense to require that the user also write an explicit 'new', for
>> symmetry.
>>
>> NotNull is a bit of a weird case because it can wrap either a raw
>> pointer or a smart pointer, so it doesn't clearly fit into either
>> category. Perhaps it would make sense for MakeNotNull to only be
>> usable with smart pointers?
>>
>> Cheers,
>> Botond
>>
>>


Re: Intent to require `mach try` for submitting to Try

2017-09-19 Thread Eric Rescorla
On Tue, Sep 19, 2017 at 10:21 AM, Eric Rescorla <e...@rtfm.com> wrote:

>
> On Tue, Sep 19, 2017 at 9:20 AM, Andrew McCreight <amccrei...@mozilla.com>
> wrote:
>
>> On Tue, Sep 19, 2017 at 8:49 AM, Eric Rescorla <e...@rtfm.com> wrote:
>>
>> > Generally no, but this is an unfortunate consequence of Mozilla's
>> > decision a while ago to pick a VCS which has not turned out to be the
>> > dominant choice (not placing blame here, but I think it's clear that's
>> > how it's turned out). The result is that a lot of people want to use
>> > git and that means git needs to have a first class experience.
>> >
>>
>> I've been using git for years now to develop Firefox, and I feel like
>> it is a first class experience. There's a one time cost to setting up
>> cinnabar, but after that, everything just works, including |mach try|
>> and |mach mozreview|. It is still probably less setup than Mercurial
>> users have to go through to install enough extensions to make hg usable.
>> ;) Sure, there's a bit of "wacky custom machinery", but developers using
>> hg also have to deal with some of that, so that's more of a Firefox
>> developer problem than a git Firefox developer problem.
>>
>
> Maybe you have different workflows than I do.
>
> Specifically, I want to be able to do all of:
>
> 1. Push to try
> 2. Land CLs directly
> 3. Push to review (I mostly use phabricator so, that works fine)
> 4. Push branches to Github
>
> It's the combination of 1/2 and 4 that works badly because hg/cinnabar
> and github are basically just separate histories, at least with nss-dev
> but IIRC with gecko-dev.
>

Bobby and I chatted offline and it seems that if you use mozilla/gecko
rather than mozilla/gecko-dev that this works properly. If so, then it
seems like having well-supported github-based cinnabar clones of all our
repos would go a long way towards addressing my issues.

-Ekr


>
> I've also had cinnabar fail badly at various times, and then it's been
> pretty unclear what the service level guarantees for that were.
>
> -Ekr
>
>
>>
>> Andrew
>>
>>
>>
>> > -Ekr
>> >
>> >
>> > > Originals to end.
>> > >
>> > >
>> > > On Mon, Sep 18, 2017 at 5:16 AM, Eric Rescorla <e...@rtfm.com> wrote:
>> > >
>> > >> On Mon, Sep 18, 2017 at 2:56 AM, James Graham <
>> ja...@hoppipolla.co.uk>
>> > >> wrote:
>> > >>
>> > >> > On 18/09/17 04:05, Eric Rescorla wrote:
>> > >> >
>> > >> > But that's just a general observation; if you look at this specific
>> > >> case,
>> > >> >>> it might not be much effort to support native git for
>> richer/future
>> > >> try
>> > >> >>> pushing. But that's very different from requiring all the tools
>> to
>> > >> >>> support
>> > >> >>> native git on an equal basis. And it seems reasonable to evaluate
>> > the
>> > >> >>> utility of this specific support via a poll, even one known to be
>> > >> biased.
>> > >> >>>
>> > >> >>>
>> > >> >> I don't think that's true, for the reasons I indicated above.
>> Rather,
>> > >> >> there's a policy decision about whether we are going to have Git
>> as a
>> > >> >> first-class thing or whether we are going to continue force
>> everyone
>> > >> who
>> > >> >> uses Git to fight with inadequate workflows. We know there are
>> plenty
>> > >> of
>> > >> >> people who use Git.
>> > >> >>
>> > >> >
>> > >> > I don't entirely understand what concrete thing is being proposed
>> > here.
>> > >> As
>> > >> > far as I can tell the git-hg parameter space contains the following
>> > >> points:
>> > >> >
>> > >> >  1. Use hg on the server and require all end users to use it
>> > >> >  2. Use git on the server and require all end users to use it
>> > >> >  3. Use hg on the server side and use client-side tooling to allow
>> git
>> > >> > users to interact with the repository
>> > >> >  4. Use git on the server side and use client-side tooling to
>> allow hg
>> > >> 

Re: Intent to require `mach try` for submitting to Try

2017-09-19 Thread Eric Rescorla
On Tue, Sep 19, 2017 at 9:20 AM, Andrew McCreight <amccrei...@mozilla.com>
wrote:

> On Tue, Sep 19, 2017 at 8:49 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
> > Generally no, but this is an unfortunate consequence of Mozilla's
> > decision a while ago to pick a VCS which has not turned out to be the
> > dominant choice (not placing blame here, but I think it's clear that's
> > how it's turned out). The result is that a lot of people want to use
> > git and that means git needs to have a first class experience.
> >
>
> I've been using git for years now to develop Firefox, and I feel like it is
> a first class experience. There's a one time cost to setting up cinnabar,
> but after that, everything just works, including |mach try| and |mach
> mozreview|. It is still probably less setup than Mercurial users have to go
> through to install enough extensions to make hg usable. ;) Sure, there's a
> bit of "wacky custom machinery", but developers using hg also have to deal
> with some of that, so that's more of a Firefox developer problem than a git
> Firefox developer problem.
>

Maybe you have different workflows than I do.

Specifically, I want to be able to do all of:

1. Push to try
2. Land CLs directly
3. Push to review (I mostly use phabricator so, that works fine)
4. Push branches to Github

It's the combination of 1/2 and 4 that works badly, because hg/cinnabar
and github are basically just separate histories, at least with nss-dev
and IIRC also with gecko-dev.

I've also had cinnabar fail badly at various times, and then it's been
pretty unclear what the service-level guarantees for it were.

-Ekr


>
> Andrew
>
>
>
> > -Ekr
> >
> >
> > > Originals to end.
> > >
> > >
> > > On Mon, Sep 18, 2017 at 5:16 AM, Eric Rescorla <e...@rtfm.com> wrote:
> > >
> > >> On Mon, Sep 18, 2017 at 2:56 AM, James Graham <ja...@hoppipolla.co.uk
> >
> > >> wrote:
> > >>
> > >> > On 18/09/17 04:05, Eric Rescorla wrote:
> > >> >
> > >> > But that's just a general observation; if you look at this specific
> > >> case,
> > >> >>> it might not be much effort to support native git for
> richer/future
> > >> try
> > >> >>> pushing. But that's very different from requiring all the tools to
> > >> >>> support
> > >> >>> native git on an equal basis. And it seems reasonable to evaluate
> > the
> > >> >>> utility of this specific support via a poll, even one known to be
> > >> biased.
> > >> >>>
> > >> >>>
> > >> >> I don't think that's true, for the reasons I indicated above.
> Rather,
> > >> >> there's a policy decision about whether we are going to have Git
> as a
> > >> >> first-class thing or whether we are going to continue to force
> everyone
> > >> who
> > >> >> uses Git to fight with inadequate workflows. We know there are
> plenty
> > >> of
> > >> >> people who use Git.
> > >> >>
> > >> >
> > >> > I don't entirely understand what concrete thing is being proposed
> > here.
> > >> As
> > >> > far as I can tell the git-hg parameter space contains the following
> > >> points:
> > >> >
> > >> >  1. Use hg on the server and require all end users to use it
> > >> >  2. Use git on the server and require all end users to use it
> > >> >  3. Use hg on the server side and use client-side tooling to allow
> git
> > >> > users to interact with the repository
> > >> >  4. Use git on the server side and use client-side tooling to allow
> hg
> > >> > users to interact with the repository
> > >> >  5. Provide some server side magic to present both git and hg to
> > clients
> > >> > (with git, hg, or something else, as the on-disk format)
> > >> >
> > >> > These all seem to have issues relative to the goal of "vanilla git
> > with
> > >> no
> > >> > custom code":
> > >> >
> > >> >  1. Doesn't allow git to be used at all.
> > >> >  2. Requires a multi-year transition away from hg. Probably not
> > popular
> > >> > with hg fans.
> > >> >  3. The status quo. Requires using a library for converting between
> hg
> > >> and
> > >> > git (i.e. cinnabar) or some mozilla-specific custom scripts (the
> > >> > old moz-git-tools)

Re: Intent to require `mach try` for submitting to Try

2017-09-19 Thread Eric Rescorla
On Tue, Sep 19, 2017 at 8:40 AM, Aki Sasaki <asas...@mozilla.com> wrote:

> On 9/16/17 6:43 AM, Eric Rescorla wrote:
>
>> 2. There are a lot more people writing code for Firefox than developing
>> the
>> internal tools, so in general, costs on those people should be avoided.
>
>
>
> On Mon, Sep 18, 2017 at 5:16 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> And having something complex and scary on the server
>> side is (at least to me) obviously better than having something complex
>> and
>>  scary (and did I mention constantly out of date) on the client side,
>> because it's all in one place and externally just looks like the thing the
>> client is expecting. Note that we already have half of this because we
>> have
>> one-way synching to gecko-dev on the server side. Perhaps one could also
>> rip off some of the servo-gecko synching stuff here, but I don't know much
>> about how that's architected.
>>
>
> I might be reading the first statement wrong: I see that as an argument to
> avoid putting more costs on the people who develop and maintain internal
> tools, because it's a smaller group. If this is an accurate reading, then
> the second paragraph appears to be contradictory: moving the pain and
> complexity around git from all Firefox developers onto this smaller group
> will increase costs for that smaller group. From my perspective, gecko-dev
> conversions have largely been as smooth as can be expected for the past
> number of years, but we're seeing more timeouts, and it may only be a
> matter of time before we either need to sink significant internal tools
> development time into the conversion script, or we hit an issue that causes
> significant downtime.
>

Sorry I wasn't clear, because I meant the opposite: the purpose of tooling
is to make the experience for the majority of developers as smooth as
possible, because there are a lot more people doing development of Firefox
than writing tools. Accordingly, things which move load from the people
developing tooling to everyone else should generally be avoided unless the
cost on everyone else is trivial and the cost of the tooling is very high.
That may or may not mean that the tools group needs to be bigger, but
that's part of the cost/benefit calculation. Certainly, we shouldn't take
the size of the tools group as fixed for purposes of that assessment.


If I'm reading wrong, and you're saying we need to avoid putting costs on
> the larger Firefox development group, then the two statements are
> non-contradictory. I do think expecting to support 2 VCSes as seamlessly as
> one is a large ask of the smaller team; do many other software projects
> support multiple official VCSes and workflows?
>

Generally no, but this is an unfortunate consequence of Mozilla's decision
a while ago to pick a VCS which has not turned out to be the dominant
choice (not placing blame here, but I think it's clear that's how it's
turned out). The result is that a lot of people want to use git and that
means git needs to have a first class experience.

-Ekr


> Originals to end.
>
>
> On Mon, Sep 18, 2017 at 5:16 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> On Mon, Sep 18, 2017 at 2:56 AM, James Graham <ja...@hoppipolla.co.uk>
>> wrote:
>>
>> > On 18/09/17 04:05, Eric Rescorla wrote:
>> >
>> > But that's just a general observation; if you look at this specific
>> case,
>> >>> it might not be much effort to support native git for richer/future
>> try
>> >>> pushing. But that's very different from requiring all the tools to
>> >>> support
>> >>> native git on an equal basis. And it seems reasonable to evaluate the
>> >>> utility of this specific support via a poll, even one known to be
>> biased.
>> >>>
>> >>>
>> >> I don't think that's true, for the reasons I indicated above. Rather,
>> >> there's a policy decision about whether we are going to have Git as a
>> >> first-class thing or whether we are going to continue to force everyone
>> who
>> >> uses Git to fight with inadequate workflows. We know there are plenty
>> of
>> >> people who use Git.
>> >>
>> >
>> > I don't entirely understand what concrete thing is being proposed here.
>> As
>> > far as I can tell the git-hg parameter space contains the following
>> points:
>> >
>> >  1. Use hg on the server and require all end users to use it
>> >  2. Use git on the server and require all end users to use it
>> >  3. Use hg on the server side and use client-side tooling to allow git
> users to interact with the repository

Re: Intent to require `mach try` for submitting to Try

2017-09-18 Thread Eric Rescorla
On Mon, Sep 18, 2017 at 2:56 AM, James Graham <ja...@hoppipolla.co.uk>
wrote:

> On 18/09/17 04:05, Eric Rescorla wrote:
>
> But that's just a general observation; if you look at this specific case,
>>> it might not be much effort to support native git for richer/future try
>>> pushing. But that's very different from requiring all the tools to
>>> support
>>> native git on an equal basis. And it seems reasonable to evaluate the
>>> utility of this specific support via a poll, even one known to be biased.
>>>
>>>
>> I don't think that's true, for the reasons I indicated above. Rather,
>> there's a policy decision about whether we are going to have Git as a
>> first-class thing or whether we are going to continue to force everyone who
>> uses Git to fight with inadequate workflows. We know there are plenty of
>> people who use Git.
>>
>
> I don't entirely understand what concrete thing is being proposed here. As
> far as I can tell the git-hg parameter space contains the following points:
>
>  1. Use hg on the server and require all end users to use it
>  2. Use git on the server and require all end users to use it
>  3. Use hg on the server side and use client-side tooling to allow git
> users to interact with the repository
>  4. Use git on the server side and use client-side tooling to allow hg
> users to interact with the repository
>  5. Provide some server side magic to present both git and hg to clients
> (with git, hg, or something else, as the on-disk format)
>
> These all seem to have issues relative to the goal of "vanilla git with no
> custom code":
>
>  1. Doesn't allow git to be used at all.
>  2. Requires a multi-year transition away from hg. Probably not popular
> with hg fans.
>  3. The status quo. Requires using a library for converting between hg and
> git (i.e. cinnabar) or some mozilla-specific custom scripts (the old
> moz-git-tools)
>  4. Like 3. but with an additional multi-year transition and different
> custom tooling.
>  5. Allows vanilla git and hg on the client side, but requires something
> complex, custom, and scary on the server side to allow pushing to either
> repo. Could be possible if we eliminate ~all manual pushes (i.e. everything
> goes via autoland), but cinnabar or similar is still there in the
> background.
>

This is what I meant. And having something complex and scary on the server
side is (at least to me) obviously better than having something complex
and scary (and did I mention constantly out of date) on the client side,
because it's all in one place and externally just looks like the thing the
client is expecting. Note that we already have half of this because we have
one-way synching to gecko-dev on the server side. Perhaps one could also
rip off some of the servo-gecko synching stuff here, but I don't know much
about how that's architected.


Given none of those options seem to fit, the only other possibility I can
> think of is to skip the general problem of how to interact with the VCS for
> try specifically by making submitting to try not look like a VCS push, but
> like some other way of sending a blob of code to a remote machine (e.g.
> using some process that generates a patch file). But unless there's some
> extant way to achieve that it seems like it would be replacing
> known-working code (cinnabar) with new custom code.
>

> So my best guess is that you mean that all pushes should go via autoland
> and we should provide read-only hg/git repositories, and try pushes should
> also go via something other than a vcs push. But I'm probably missing
> something.


Well, I do think this is the direction we should be moving towards, as it
seems to have a pile of advantages. Indeed, if we're *already* going to be
forcing people to submit to try via mach rather than via vcs push, why
wouldn't we do that for this piece of it now?
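
To be concrete, a submission would then look something like this (a
sketch; see `mach try --help` for the exact flags):

./mach try -b do -p linux64 -u xpcshell -t none

with mach constructing the try request and doing whatever push or API call
is needed, regardless of the underlying VCS.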

-Ekr


>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require `mach try` for submitting to Try

2017-09-18 Thread Eric Rescorla
On Mon, Sep 18, 2017 at 1:10 AM, Henri Sivonen <hsivo...@hsivonen.fi> wrote:

> On Mon, Sep 18, 2017 at 6:05 AM, Eric Rescorla <e...@rtfm.com> wrote:>
> I don't think that's true, for the reasons I indicated above. Rather,
> > there's a policy decision about whether we are going to have Git as a
> > first-class thing or whether we are going to continue to force everyone who
> > uses Git to fight with inadequate workflows. We know there are plenty of
> > people who use Git.
>
> Besides occasionally having to convert between hg and cinnabar
> changeset ids, what's inadequate about cinnabar workflow(s)?
>

That's a big part of it. I've also had it fail randomly and for unobvious
reasons (especially if you have some kind of unconventional hg install) and
then had to figure out why. And of course it means you need to install and
maintain hg and cinnabar, not just git. It's just another piece of wacky
custom machinery that we make devs suffer through.
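
(For reference, the id conversion Henri mentions is a one-liner each way,
assuming current git-cinnabar subcommand names:

git cinnabar git2hg <git-commit-sha>   # git commit -> hg changeset id
git cinnabar hg2git <hg-changeset-id>  # hg changeset id -> git commit

so it's not hard, it's just one more piece of machinery to know about.)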

-Ekr


>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require `mach try` for submitting to Try

2017-09-17 Thread Eric Rescorla
On Sun, Sep 17, 2017 at 12:09 PM, Steve Fink <sf...@mozilla.com> wrote:

> On 9/16/17 6:43 AM, Eric Rescorla wrote:
>
>> On Fri, Sep 15, 2017 at 9:25 PM, Gregory Szorc <g...@mozilla.com> wrote:
>>
>>
>> I'd prefer to take a data-driven approach to answering the question of "do
>>> we need to support vanilla Git." I wish I had local developer metrics or
>>> results from a recent developer survey to feed into a decision. In lieu
>>> of
>>> that data, I'd encourage users of vanilla Git to make noise on bug
>>> 1400470
>>> if you feel you need `mach try` support.
>>>
>>> This is not a good decision making procedure. First, requiring people to
>> volunteer their objections inherently underweights those objections
>> because
>> it's work to do so. Second, the current situation has incentivized people
>> to use cinnabar, so the number of vanilla git users is less than it
>> otherwise would be. But this is bad, for the reasons I listed above. The
>> right question is: what future do we want to live in, and that's a policy
>> question, not one decided by taking a (highly biased) poll of what people
>> currently do.
>>
>
> Supporting more tool variants (eg adding bare git to the current two)
> requires more resources.


Well, if we simply supported vanilla git properly, we wouldn't need
cinnabar support, so it's not strictly greater.



> In a perfect world, that resource decision would be based on the actual
> costs and benefits to the overall organization. My observation is that so
> far, we have kept a tight limit on the number of resources dedicated to
> this sort of tooling, independent of an objective cost/benefit analysis.
> (Which is even somewhat justifiable; tooling, like many other areas, can
> easily grow to absorb whatever resources you give it, so it's hard to judge
> "enough".)
>
> This means that I have a lot of sympathy for the tooling people who would
> be forced to maintain the wider variation, with no reason to believe that
> the necessary resources will be made available. It will just starve out
> resources for other tooling, limiting us closer to a common denominator set
> of functionality.
>

You say "common denominator" like it's a bad thing. But using commodity
tooling is *good*, not bad, as the recent example of doing something
largely custom with mozreview rather than just taking Phabricator
more-or-less off the shelf demonstrates. The reason to write new tooling is
when you can't get away with commodity tooling.


> But that's just a general observation; if you look at this specific case,
> it might not be much effort to support native git for richer/future try
> pushing. But that's very different from requiring all the tools to support
> native git on an equal basis. And it seems reasonable to evaluate the
> utility of this specific support via a poll, even one known to be biased.
>

I don't think that's true, for the reasons I indicated above. Rather,
there's a policy decision about whether we are going to have Git as a
first-class thing or whether we are going to continue to force everyone who
uses Git to fight with inadequate workflows. We know there are plenty of
people who use Git.

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require `mach try` for submitting to Try

2017-09-16 Thread Eric Rescorla
On Fri, Sep 15, 2017 at 9:25 PM, Gregory Szorc <g...@mozilla.com> wrote:

> On Fri, Sep 15, 2017 at 8:37 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>>
>>
>> On Fri, Sep 15, 2017 at 8:33 PM, Gregory Szorc <g...@mozilla.com> wrote:
>>
>>> On Fri, Sep 15, 2017 at 7:44 PM, Eric Rescorla <e...@rtfm.com> wrote:
>>>
>>>> What happens if you are using git?
>>>>
>>>
>>> git-cinnabar is already supported.
>>>
>>
>> Supported how? Do I have to have special remote names? Special refs? Or
>> does it mess with my git config?
>>
>
> It "just works." Git doesn't require remotes or tracking refs to do pushes
> and the git-cinnabar `mach try` integration doesn't make meaningful (read:
> outside of reflog) persisted changes to your repo state as part of doing
> its job.
>
>
>>
>>
>>
>>> Vanilla git is TBD. I see bug 1400470 was just filed for that.
>>>
>>
>> Having this fixed seems like a blocker.
>>
>
> Maybe. The policy is to support Git as a VCS tool for Firefox development.
> Previously, git-cinnabar - but not vanilla Git - has counted. e.g.
> git-mozreview and artifact builds require cinnabar. I would strongly prefer
> to maintain the "support Git == git-cinnabar" interpretation because
> otherwise we're looking at supporting N+1 tools (where the +1 for vanilla
> Git is *substantially* more effort than the existing Mercurial +
> git-cinnabar).
>

But the cost of that is imposing cinnabar on everyone who wants to use git.
This is bad for several reasons:

1. Custom tools are bad (incidentally, this is also an argument against the
machization of everything, but that's perhaps for another day)
2. There are a lot more people writing code for Firefox than developing the
internal tools, so in general, costs on those people should be avoided.

Moreover, the trend here is to support native *git*: Phabricator doesn't
require hg (at least it shouldn't; it certainly doesn't when we use it for
NSS) and as we move away from having people do direct pushes to the repo,
then they won't need cinnabar to land code either. So, having try be about
the only place where you need cinnabar is bad.



> I'd prefer to take a data-driven approach to answering the question of "do
> we need to support vanilla Git." I wish I had local developer metrics or
> results from a recent developer survey to feed into a decision. In lieu of
> that data, I'd encourage users of vanilla Git to make noise on bug 1400470
> if you feel you need `mach try` support.
>

This is not a good decision making procedure. First, requiring people to
volunteer their objections inherently underweights those objections because
it's work to do so. Second, the current situation has incentivized people
to use cinnabar, so the number of vanilla git users is less than it
otherwise would be. But this is bad, for the reasons I listed above. The
right question is: what future do we want to live in, and that's a policy
question, not one decided by taking a (highly biased) poll of what people
currently do.

-Ekr

>
>
>>
>>
>>>
>>>
>>>>
>>>> On Fri, Sep 15, 2017 at 3:30 PM, Gregory Szorc <g...@mozilla.com> wrote:
>>>>
>>>>> The Try Service ("Try") is a mechanism that allows developers to
>>>>> schedule
>>>>> tasks in automation. The main API for that service is "Try Syntax"
>>>>> (e.g.
>>>>> "try: -b o -p linux -u xpcshell"). And the transport channel for making
>>>>> that API call is a Mercurial changeset's commit message plus a version
>>>>> control "push" to the "try" repository on hg.mozilla.org.
>>>>>
>>>>> As the recent "Reminder on Try usage and infrastructure resources"
>>>>> thread
>>>>> details, scaling Firefox automation - and Try in particular - is
>>>>> challenging. In addition, the number of workflows and variations that
>>>>> automation needs to support is ever increasing and continuously
>>>>> evolving.
>>>>>
>>>>> It shouldn't come as a surprise when I say that we've outgrown many of
>>>>> the
>>>>> implementation details of the Try Service. Try Syntax itself is over 7
>>>>> years old and has survived a complete rewrite of automation
>>>>> scheduling, for
>>>>> example. Aspects of Try are adversely impacting the ability for
>>>>> developers
>>>>> to use Try efficiently and the

Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Eric Rescorla
On Fri, Sep 15, 2017 at 8:33 PM, Gregory Szorc <g...@mozilla.com> wrote:

> On Fri, Sep 15, 2017 at 7:44 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> What happens if you are using git?
>>
>
> git-cinnabar is already supported.
>

Supported how? Do I have to have special remote names? Special refs? Or
does it mess with my git config?



> Vanilla git is TBD. I see bug 1400470 was just filed for that.
>

Having this fixed seems like a blocker.

-Ekr


>
>
>>
>> On Fri, Sep 15, 2017 at 3:30 PM, Gregory Szorc <g...@mozilla.com> wrote:
>>
>>> The Try Service ("Try") is a mechanism that allows developers to schedule
>>> tasks in automation. The main API for that service is "Try Syntax" (e.g.
>>> "try: -b o -p linux -u xpcshell"). And the transport channel for making
>>> that API call is a Mercurial changeset's commit message plus a version
>>> control "push" to the "try" repository on hg.mozilla.org.
>>>
>>> As the recent "Reminder on Try usage and infrastructure resources" thread
>>> details, scaling Firefox automation - and Try in particular - is
>>> challenging. In addition, the number of workflows and variations that
>>> automation needs to support is ever increasing and continuously evolving.
>>>
>>> It shouldn't come as a surprise when I say that we've outgrown many of
>>> the
>>> implementation details of the Try Service. Try Syntax itself is over 7
>>> years old and has survived a complete rewrite of automation scheduling,
>>> for
>>> example. Aspects of Try are adversely impacting the ability for
>>> developers
>>> to use Try efficiently and therefore undermining our collective ability
>>> to
>>> get important work done quickly.
>>>
>>> In order to put ourselves in a position to more easily change
>>> implementation details of the Try Service so we may deliver a better
>>> experience for all, we'd like to require the use of `mach try` for Try
>>> submissions. This will ensure there is a single, well-defined, and
>>> easily-controlled mechanism for submitting to Try. This will allow
>>> greater
>>> flexibility and adaptability. It will provide better opportunities for
>>> user
>>> messaging. It will ensure that any new features are seen by everyone
>>> sooner. It will eventually lead to faster results on Try for everyone.
>>>
>>> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
>>> to submit to Try.
>>>
>>> The mechanism for submitting to Try has remained relatively stable for
>>> years. `mach try` is relatively new - and I suspect unused by a sizeable
>>> population. This is a potentially highly disruptive transition. That's
>>> why
>>> we're not making it immediately and why we're sending this email today.
>>>
>>> You can mitigate the disruption by using `mach try` before the forced
>>> transition is made and reporting bugs as necessary. Have them block
>>> "require-mach-try" if you need them addressed before the transition or
>>> "mach-try" otherwise. We don't really have a good component for `mach
>>> try`
>>> bugs, so put them in TaskCluster :: Task Configuration for now and chain
>>> them to a tracking bug for visibility.
>>>
>>> FAQ
>>>
>>> Q: When will the transition be made?
>>> A: When we feel `mach try` is usable for important workflows (as judged
>>> by
>>> blockers on "require-mach-try"). Also, probably not before Firefox 57
>>> ships
>>> because we don't want to interfere with that release.
>>>
>>> Q: What about old changesets?
>>> A: You will still be able to submit to Try using the current/legacy
>>> mechanism for old changesets. There will be a "flag day" of sorts on
>>> mozilla-central after which all Try submissions will require `mach try`
>>> or
>>> nothing will get scheduled.
>>>
>>> Q: Will Try Syntax continue to work?
>>> A: For the foreseeable future, yes. There is a long-term goal to replace
>>> Try Syntax with something more automatic and less effort - at least for
>>> most use cases. But there are no definite plans or a timetable to remove
>>> Try Syntax.
>>>
>>> Q: Are there any other major changes planned?
>>> A: Several. People are hacking on path-based selection, `mach try fuzzy`
>>> improvements, moz.build-based annotations influencing what gets
>>> scheduled, not using a traditional Mercurial repository for backing Try,
>>> and more.

Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Eric Rescorla
What happens if you are using git?

-Ekr


On Fri, Sep 15, 2017 at 3:30 PM, Gregory Szorc  wrote:

> The Try Service ("Try") is a mechanism that allows developers to schedule
> tasks in automation. The main API for that service is "Try Syntax" (e.g.
> "try: -b o -p linux -u xpcshell"). And the transport channel for making
> that API call is a Mercurial changeset's commit message plus a version
> control "push" to the "try" repository on hg.mozilla.org.
>
> As the recent "Reminder on Try usage and infrastructure resources" thread
> details, scaling Firefox automation - and Try in particular - is
> challenging. In addition, the number of workflows and variations that
> automation needs to support is ever increasing and continuously evolving.
>
> It shouldn't come as a surprise when I say that we've outgrown many of the
> implementation details of the Try Service. Try Syntax itself is over 7
> years old and has survived a complete rewrite of automation scheduling, for
> example. Aspects of Try are adversely impacting the ability for developers
> to use Try efficiently and therefore undermining our collective ability to
> get important work done quickly.
>
> In order to put ourselves in a position to more easily change
> implementation details of the Try Service so we may deliver a better
> experience for all, we'd like to require the use of `mach try` for Try
> submissions. This will ensure there is a single, well-defined, and
> easily-controlled mechanism for submitting to Try. This will allow greater
> flexibility and adaptability. It will provide better opportunities for user
> messaging. It will ensure that any new features are seen by everyone
> sooner. It will eventually lead to faster results on Try for everyone.
>
> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
> to submit to Try.
>
> The mechanism for submitting to Try has remained relatively stable for
> years. `mach try` is relatively new - and I suspect unused by a sizeable
> population. This is a potentially highly disruptive transition. That's why
> we're not making it immediately and why we're sending this email today.
>
> You can mitigate the disruption by using `mach try` before the forced
> transition is made and reporting bugs as necessary. Have them block
> "require-mach-try" if you need them addressed before the transition or
> "mach-try" otherwise. We don't really have a good component for `mach try`
> bugs, so put them in TaskCluster :: Task Configuration for now and chain
> them to a tracking bug for visibility.
>
> FAQ
>
> Q: When will the transition be made?
> A: When we feel `mach try` is usable for important workflows (as judged by
> blockers on "require-mach-try"). Also, probably not before Firefox 57 ships
> because we don't want to interfere with that release.
>
> Q: What about old changesets?
> A: You will still be able to submit to Try using the current/legacy
> mechanism for old changesets. There will be a "flag day" of sorts on
> mozilla-central after which all Try submissions will require `mach try` or
> nothing will get scheduled.
>
> Q: Will Try Syntax continue to work?
> A: For the foreseeable future, yes. There is a long-term goal to replace
> Try Syntax with something more automatic and less effort - at least for
> most use cases. But there are no definite plans or a timetable to remove
> Try Syntax.
>
> Q: Are there any other major changes planned?
> A: Several. People are hacking on path-based selection, `mach try fuzzy`
> improvements, moz.build-based annotations influencing what gets scheduled,
> not using a traditional Mercurial repository for backing Try, and more.
> Some of these may not be available to legacy Try submission workflows,
> giving you additional reasons to adopt `mach try` sooner.
>
> Q: Should I be worried about this transition negatively impacting my
> workflow?
> A: As long as you file bugs blocking "require-mach-try" to make it known
> why `mach try` doesn't work for you, your voice will be heard and hopefully
> acted on. So as long as you file bugs, you shouldn't need to worry.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: `else for` or `else { for... }`?

2017-08-30 Thread Eric Rescorla
On Wed, Aug 30, 2017 at 4:41 PM, Jeff Gilbert  wrote:

> IMO: Never else-for. (or else-while)
>
> Else-if is a reasonable continuation of concept: "Well it wasn't that,
> what if it's this instead?"
> Else-for is just shorthand for "well it wasn't that, so let's loop
> over something".
>
> Else-if is generally used for chaining, often effectively as an
> advanced switch/match statement.
>
> I've never actually seen else-for, but I imagine it would be used for:
> if (!foo) {
> } else for (i = 0; i < foo->bar; i++) {
> }
>
> I don't think this pattern has enough value to be acceptable as
> shorthand. BUT, I didn't know it was a thing until now, so I'm
> naturally biased against it.
>

I'm with Jeff here. Not even once.

if (!foo) {
} else {
  for (...) {
    ...
  }
}

As it happens, the style guide agrees with us, so I'm happy to quote it ;)

"else should only ever be followed by { or if; i.e., other control keywords
are not allowed and should be placed inside braces."

-Ekr


> On Wed, Aug 30, 2017 at 2:58 PM,   wrote:
> > Let's keep the flames alive!
> >
> > Should we always put braces after an `else`, with the only exception
> being another `if`?
> > Or should we also have exceptions for the other control structures
> (while, do, for, switch)?
> >
> > A.
> > if (...) {
> >   ...
> > } else {
> >   for (...) {
> > ...
> >   }
> > }
> >
> > B.
> > if (...) {
> >   ...
> > } else for (...) {
> >   ...
> > }
> >
> > I can see arguments for both, so I'm not too sure which one
> should win. :-)
> > WDYT?
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-30 Thread Eric Rescorla
On Wed, Aug 30, 2017 at 3:55 PM, Michael Smith  wrote:

> Hi everyone,
>
> Mozilla DevTools is exploring implementing parts of the Chrome DevTools
> Protocol ("CDP") [0] in Firefox. This is an HTTP, WebSockets, and JSON
> based protocol for automating and inspecting running browser pages.
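
(Illustrative aside: on the wire this is JSON-RPC-style messages over the
WebSocket; a command and its response look roughly like

{"id": 1, "method": "Browser.getVersion"}
{"id": 1, "result": {"protocolVersion": "1.2", "product": "...", "userAgent": "..."}}

where "method" names a domain.command pair from the protocol JSON.)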
>
> Originally built for the Chrome DevTools, it has seen wider adoption with
> outside developers. In addition to Chrome/Chromium, the CDP is supported by
> WebKit, Safari, Node.js, and soon Edge, and an ecosystem of libraries and
> tools already exists which plug into it, for debugging, extracting
> performance data, providing live-preview functionality like the Brackets
> editor, and so on. We believe it would be beneficial if these could be
> leveraged with Firefox as well.
>
> The initial implementation we have in mind is an alternate target for
> third-party integrations to connect to, in addition to the existing Firefox
> DevTools Server. The Servo project has also expressed interest in adding
> CDP support to improve its own devtools story, and a PR is in flight to
> land a CDP server implementation there [1].
>
> I've been working on this project with guidance from Jim Blandy. We've
> come up with the following approach:
>
> - A complete, typed Rust implementation of the CDP protocol messages and
> (de)serialization lives in the "cdp" crate [2], automatically generated
> from the protocol's JSON specification [3] using a build script (this
> happens transparently as part of the normal Cargo compilation process).
> This comes with Rustdoc API documentation of all messages/types in the
> protocol [4] including textual descriptions bundled with the specification
> JSON. The cdp crate will likely track the Chrome stable release for which
> version of the protocol is supported. A maintainers' script exists which
> can find and fetch the appropriate JSON [5].
>
> - The "tokio-cdp" crate [6] builds on the types and (de)serialization
> implementation in the cdp crate to provide a server implementation built on
> the Tokio asynchronous I/O system. The server side provides traits for
> consuming incoming CDP RPC commands, executing them concurrently and
> sending back responses, and simultaneously pushing events to the client.
> They are generic over the underlying transport, so the same backend
> implementation could provide support for "remote" clients plugging in over
> HTTP/WebSockets/JSON or, for example, a browser-local client communicating
> over IPDL.
>
> - In Servo, a new component plugs into the cdp and tokio-cdp crates and
> acts on behalf of connected CDP clients in response to their commands,
> communicating with the rest of the Servo constellation. This server is
> disabled by default and can be started by passing a "--cdp" flag to the
> Servo binary, binding a TCP listener to the loopback interface at the
> standard CDP port 9222 (a different port can be specified as an option to
> the flag).
>
> - The implementation we envision in Firefox/Gecko would act similarly: a
> new Rust component, disabled by default and switched on via a command line
> flag, which binds to a local port and mediates between Gecko internals and
> clients connected via tokio-cdp.
>
> We chose to build this on Rust and the Tokio event loop, along with the
> hyper HTTP library and rust-websocket which plug into Tokio.
>
> Rust and Cargo provide excellent facilities for compile-time code
> generation which integrate transparently into the normal build process,
> avoiding the need to invoke scripts by hand to keep generated artifacts in
> sync. The Rust ecosystem provides libraries such as quote [7] and serde [8]
> which allow us to auto-generate an efficient, typed, and self-contained
> interface for the entire protocol. This moves the complexity of ingesting,
> validating, and extracting information from client messages out of the
> Servo- and Gecko-specific backend implementations, helps to ensure they
> conform correctly to the protocol specification, and provides a structured
> way of upgrading to new protocol versions.
>
> As for Tokio, the event loop and Futures-based model of concurrency it
> offers maps well to the Chrome DevTools Protocol. RPC commands typically
> execute simultaneously, returning responses in order of completion, while
> the server continuously generates events to which the client has
> subscribed. Under Tokio we can spawn multiple lightweight Tasks, dispatch
> messages to them, and multiplex their responses back over the single client
> connection. The Tokio event loop is nicely self-contained to the one or,
> optionally, more threads it is allocated, so the rest of the application
> doesn't need to be aware of it.
>
> Use of Tokio is becoming a standard in the Rust ecosystem---it's worth
> mentioning that Mozilla funds Tokio development [9] and employs some of its
> primary developers. Servo currently depends on an older version of the
> hyper HTTP client/server library, and consequently 

Re: Coding style: Argument alignment

2017-08-30 Thread Eric Rescorla
On Wed, Aug 30, 2017 at 9:29 AM, Sylvestre Ledru <sle...@mozilla.com> wrote:

> On 30/08/2017 at 17:25, Eric Rescorla wrote:
>
>
>
> On Wed, Aug 30, 2017 at 1:21 AM, Sylvestre Ledru <sle...@mozilla.com>
> wrote:
>
>>
>> On 30/08/2017 at 08:53, Henri Sivonen wrote:
>> > Regardless of the outcome of this particular style issue, where are we
>> > in terms of clang-formatting all the non-third-party C++ in the tree?
>>
>> We have been working on that but we delayed it to avoid doing it during
>> the 57 work.
>>
>> We will share more news about that soon.
>>
>
> This is probably obvious, but there are chunks of non-third-party C++ in
> the tree that follow other styles (e.g., media/mtransport is Google style
> guide style) so please be careful not to reformat those.
>
> Yeah, I have been maintaining this list:
> https://dxr.mozilla.org/mozilla-central/source/.clang-format-ignore
> (partially regenerated from
> https://dxr.mozilla.org/mozilla-central/source/tools/rewriting/ThirdPartyPaths.txt)
>
>
I note that this doesn't seem to include media/mtransport/*.
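
(Assuming that file is the same one-glob-per-line format as the
third-party entries, the fix is presumably just an added line:

media/mtransport/*

but I haven't checked the exact glob style.)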

-Ekr

Sylvestre
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: Argument alignment

2017-08-30 Thread Eric Rescorla
On Wed, Aug 30, 2017 at 1:21 AM, Sylvestre Ledru  wrote:

>
> On 30/08/2017 at 08:53, Henri Sivonen wrote:
> > Regardless of the outcome of this particular style issue, where are we
> > in terms of clang-formatting all the non-third-party C++ in the tree?
>
> We have been working on that but we delayed it to avoid doing it during
> the 57 work.
>
> We will share more news about that soon.
>

This is probably obvious, but there are chunks of non-third-party C++ in
the tree that follow other styles (e.g., media/mtransport is Google style
guide style) so please be careful not to reformat those.

-Ekr


> > I've had a couple of cases of late where the initializers in a
> > pre-existing constructor didn't follow our style, so when I changed
> > the list a tiny bit, the post-clang-format patch showed the whole list
> > as changed (due to any change to the list triggering reformatting the
> > whole thing to our style). I think it would be better for productivity
> > not to have to explain artifacts of clang-format during review, and at
> > this point the way to avoid it would be to make sure the base revision
> > is already clang-formatted.
> >
> Could you report a bug? We wrote a few patches upstream to improve
> the support of our coding style.
>
>
> Thanks,
> Sylvestre
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: WebVR Working Group

2017-08-18 Thread Eric Rescorla
LGTM

-Ekr


On Fri, Aug 18, 2017 at 2:59 PM, L. David Baron <dba...@dbaron.org> wrote:

> OK, here's a draft of Tantek's points in a form that I think we
> could submit.  Please let me know if there are things you think I
> should change:
>
> -
> We support the idea of bringing WebVR into a working group at the W3C.
>
> However, bringing work that has been incubating in a community group (CG)
> into a working group (WG) requires more interaction with the existing CG
> than has happened here.  While we are aware that not all members of the CG
> support moving the work into a WG, we would like to see the process of
> developing a WG charter involve the existing CG more, and try to find an
> acceptable compromise that allows the formation of a WG.
>
> We're objecting because we believe this charter should be redrafted in a
> dialog with the existing Community Group, in order to build consensus
> there on the scope of the
> working group, its relationship to the community group, and other details.
> -
>
> -David
>
> On Friday 2017-08-18 10:17 -0500, Lars Bergstrom wrote:
> > Thanks, Tantek! I like this response. I have not been able to reach
> > google/microsoft but will inform them of this intention.
> >
> > To reinforce point #1, I'd add that WebVR is currently under TAG review
> > (see https://github.com/w3ctag/design-reviews/issues/185 ).
> > Standardization
> > is definitely the intended path here.
> > - Lars
> >
> > On Thu, Aug 17, 2017 at 5:33 PM, Tantek Çelik <tan...@cs.stanford.edu>
> > wrote:
> >
> > > Given that we have a day left to respond to this poll, we should begin
> > > writing up at least a draft answer with known facts that we can
> > > iterate on as we get more information.
> > >
> > > Rough draft WebVR proposed charter response points for consideration:
> > >
> > >
> > > 1. Timing is good. We think WebVR is ready for a WG to formally
> > > standardize it.
> > >
> > > [Our very action of shipping a WebVR feature publicly (without pref)
> > > speaks louder than any words on any email lists (including this one)
> > > and communicates that we think WebVR is ready for adoption on the open
> > > web (if that were not true, we should not be shipping it publicly, but
> > > my understanding is that that decision has been made.), and thus ready
> > > for rapid standardization among implementers.]
> > >
> > > 2. WG charter details bad. We have strong concerns about the proposed
> > > WG charter as written, including apparent disconnects with the CG, and
> > > in particular failure to involve  implementers (e.g. browser vendors
> > > and major hardware providers).
> > >
> > > 3. Conclusion: Formal objection. Charter bad, needs to be withdrawn,
> > > be rewritten in an open dialog with the CG, such that there is at
> > > least rough consensus with the CG on scope, chairs, and other details.
> > >
> > >
> > > I believe these points reflect our actions and what Lars has
> communicated
> > > below:
> > >
> > > On Thu, Aug 17, 2017 at 9:14 AM, Lars Bergstrom <larsb...@mozilla.com>
> > > wrote:
> > > > I'll follow up more with the chairs of the community group (they just
> > > had a
> > > > face to face earlier this week and I presume it came up). The last
> bit
> > > that
> > > > I heard is consistent with what Dan mentioned -  the concern is not
> > > around
> > > > standardization
> > >
> > > Thanks for the clarification, thus point 1.
> > >
> > > > but that neither the chairs nor the browser vendors nor the
> > > > major hardware providers were consulted publicly in the creation of a
> > > > proposal to transition to a working group:
> > > > https://lists.w3.org/Archives/Public/public-webvr/2017Jul/0083.html
> > >
> > > Thus point 2.
> > >
> > > > Based on that thread, I'd expect the proposal to be withdrawn or -
> as Dan
> > > > mentioned - things adjusted to involve the the current spec
> contributors.
> > >
> > > Thus point 3 - we should openly advocate for the proposed charter to
> > > be withdrawn and rewritten accordingly.
> > >
> > >
> > > > I'll try to get on the phone with folks to find out more and get
> > > something
> > > > to dbaron by tomorrow. I'm not familiar with the inner workings of
> the
> > > W3C,
> > > > but I find it hard to 

Re: Proposed W3C Charter: WebVR Working Group

2017-08-16 Thread Eric Rescorla
On Wed, Aug 16, 2017 at 5:18 PM, Daniel Veditz  wrote:

> On Wed, Aug 16, 2017 at 3:51 PM, L. David Baron  wrote:
>
> > I still think opposing this charter because the group should still
> > be in the incubation phase would be inconsistent with our shipping
> > and promotion of WebVR.
> >
>
> I agree that would be exceptionally odd and require a well-reasoned
> argument about why formal standardization was inappropriate.
>

This puzzles me as well. Lars, can you explain what the argument against
standardization of a shipping feature is?

-Ekr


>
> I'm troubled that the members of the incubation group seem to feel that
> chairs are being imposed on them who have been less involved (or
> uninvolved?) with leading the feature to the point it's gotten so far. But
> I don't understand the politics of that or whether we could or should get
> involved on that point.
>
> -Dan Veditz
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Restricting the Notifications API to secure contexts

2017-08-07 Thread Eric Rescorla
This seems fine.

-Ekr


On Mon, Aug 7, 2017 at 6:45 AM, Anne van Kesteren  wrote:

> Chrome wants to restrict the Notifications API
> https://notifications.spec.whatwg.org/ to secure contexts:
>
>   https://github.com/whatwg/notifications/issues/93
>   https://github.com/w3c/web-platform-tests/pull/6596
>
> Given that the API involves prompting the user as well as a permission
> that remains stored across different networks it seems like a good
> idea to restrict this API to HTTPS.
>
> I was wondering if anyone has concerns with restricting the API as
> such. If there are no concerns within a week I'll let Chrome go ahead
> with the change to the standard and tests and file the necessary
> implementation bugs against the remaining browsers, including Firefox.
>
>
> --
> https://annevankesteren.nl/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving visibility of compiler warnings

2017-05-25 Thread Eric Rescorla
I'd like to second Ehsan's point, but also expand upon it into a more
general observation.

As it becomes progressively more difficult to build Firefox without mach,
it becomes increasingly important that mach respect people's workflows.
For those of us who were comfortable with make and the behavior of having
relatively unfiltered access to what the build system is doing, it would
be great if mach could have some set of flags to preserve that view
(cf. the flags to remove the timestamps so that emacs compiler mode
works).
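
Concretely, the kind of thing I mean (assuming the flag names haven't
changed):

./mach --log-no-times build

which drops the timestamp prefixes so that emacs' compile mode can parse
the error lines.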

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Race Cache With Network experiment on Nightly

2017-05-24 Thread Eric Rescorla
What's the state of pref experiments? I thought they were not yet ready.

-Ekr


On Thu, May 25, 2017 at 7:15 AM, Benjamin Smedberg 
wrote:

> Is there a particular reason this is landing directly to nightly rather
> than using a pref experiment? A pref experiment is going to provide much
> more reliable comparative data. In general we're pushing everyone to use
> controlled experiments for nightly instead of landing experimental work
> directly.
>
> --BDS
>
> On Wed, May 24, 2017 at 11:36 AM, Valentin Gosu 
> wrote:
>
> > As part of the Quantum Network initiative we are working on a project
> > called "Race Cache With Network" (rcwn) [1].
> >
> > This project changes the way the network cache works. When we detect that
> > disk IO may be slow, we send a network request in parallel, and we use
> the
> > first response that comes back. For users with slow spinning disks and a
> > low latency network, the result would be faster loads.
> >
> > This feature is currently preffed off - network.http.rcwn.enabled
> > In bug 1366224, which is about to land on m-c, we plan to enable it on
> > nightly for one or two days, to get some useful telemetry for our future
> > work.
> >
> > For any crashes or unexpected behaviour, please file bugs blocking
> 1307504.
> >
> > Thanks!
> >
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=rcwn
> > [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1366224
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving visibility of compiler warnings

2017-05-20 Thread Eric Rescorla
On Sat, May 20, 2017 at 1:16 PM, Kris Maglione 
wrote:

> On Sat, May 20, 2017 at 08:36:13PM +1000, Martin Thomson wrote:
>
>> On Sat, May 20, 2017 at 4:55 AM, Kris Maglione 
>> wrote:
>>
>>> Can we make some effort to get clean warnings output at the end of
>>> standard
>>> builds? A huge chunk of the warnings come from NSS and NSPR, and should
>>> be
>>> easily fixable.
>>>
>>
>> Hmm, these are all -Wsign-compare issues bar one, which is fixed
>> upstream.  We have an open bug tracking the warnings
>> (https://bugzilla.mozilla.org/show_bug.cgi?id=1307958 and specifically
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1212199 for NSS).
>>
>> NSS is built with -Werror separately, but disables errors on
>> -Wsign-compare.  Disabling those warnings for a Firefox build of NSS
>> wouldn't be so bad now that we share gyp config.  Based on a recent
>> build, that's 139 messages (add 36 if you want to add nspr).
>>
>
> Is there a particularly good reason that NSS needs to disable
> -Wsign-compare? That seems like a footgun waiting to happen, especially in
> crypto code.


Like many other pieces of old code, there are a lot of things that trigger
compiler warnings which we have been progressively removing, but we
haven't removed these yet. Ideally we would have some tooling that
would distinguish new warnings from old warnings, but absent that,
until we fix all the existing warnings it's either disable -Werror or
disable this particular warning. It's not something we're particularly
happy about.
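
For anyone who hasn't hit it, -Wsign-compare fires on entirely mundane
code; a minimal illustration (not taken from NSS):

#include <vector>

bool Contains(const std::vector<int>& values, int needle) {
  // warning: comparison of integer expressions of different signedness:
  // 'i' is a signed int, values.size() returns an unsigned size_t.
  for (int i = 0; i < values.size(); ++i) {
    if (values[i] == needle) {
      return true;
    }
  }
  return false;
}

so the individual fixes are mostly mechanical; it's the volume that's the
problem.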

-Ekr

>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using references vs. pointers in C++ code

2017-05-09 Thread Eric Rescorla
As Henri indicates, I think the use of references is consistent with
the style guide.

It's also worth noting that if you are using boxed pointers, then you
almost certainly want to use references to pass them around. I.e.,

foo(const RefPtr<T>& aPtr);    // avoids a useless refcount bump and drop
foo(const UniquePtr<T>& aPtr); // avoids an attempted (and failed)
                               // ownership transfer
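
To make the second point concrete, a compilable sketch (using the std::
spellings so it stands alone):

#include <memory>
#include <utility>

struct Foo {};

void Consume(std::unique_ptr<Foo> aFoo) {}        // takes ownership
void Inspect(const std::unique_ptr<Foo>& aFoo) {} // borrows; no transfer

int main() {
  auto foo = std::make_unique<Foo>();
  Inspect(foo);             // fine: ownership unchanged
  Consume(std::move(foo));  // transfer must be explicit;
                            // 'Consume(foo)' would not compile
}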


-Ekr

P.S. The Google style guide prohibits non-const references. We follow this
in the parts of the code where we follow that style (e.g., the NSS
gtests).
https://google.github.io/styleguide/cppguide.html#Reference_Arguments


On Tue, May 9, 2017 at 3:31 AM, Henri Sivonen  wrote:

> On Tue, May 9, 2017 at 12:58 PM, Emilio Cobos Álvarez 
> wrote:
> > I think references help to encode that a bit more in the type system,
> > and help reasoning about the code without having to look at the
> > implementation of the function you're calling into, or without having to
> > rely on the callers to know that you expect a non-null argument.
> >
> > Personally, I don't think that the fact that they're not used as much as
> > they could/should is a good argument to prevent their usage, but I don't
> > know what's the general opinion on that.
>
> The relevant bit of the Core Guidelines is
> https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rf-ptr-ref
> and says:
> "A pointer (T*) can be a nullptr and a reference (T&) cannot, there is
> no valid "null reference". Sometimes having nullptr as an alternative
> to indicate "no object" is useful, but if it is not, a reference is
> notationally simpler and might yield better code."
>
> As a result, I have an in-flight patch that takes T& instead of
> NotNull<T*> where applicable, even though I do use NotNull<T*> to
> annotate return values.
>
> I agree that in principle it makes sense to use the type system
> instead of relying on partial debug-build run-time measures to denote
> non-null arguments when possible. That said, having to dereference a
> smart pointer with prefix * in order to pass its referent to a method
> that takes a T& argument feels a bit odd when one is logically
> thinking of passing a pointer still, but then again, &*foo seems like a
> common pattern on the Rust side of FFI to make a reference out of a
> pointer, effectively asserting to the human reader that the pointer
> is non-null.
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ambient Light Sensor API

2017-04-28 Thread Eric Rescorla
On Thu, Apr 27, 2017 at 11:02 PM, Frederik Braun  wrote:

> On 28.04.2017 05:56, Ehsan Akhgari wrote:
> > On 04/27/2017 08:09 AM, Frederik Braun wrote:
> >> On 27.04.2017 13:56, smaug wrote:
> >>> On 04/25/2017 04:38 PM, Ehsan Akhgari wrote:
>  On 04/24/2017 06:04 PM, Martin Thomson wrote:
> > I think that 60Hz is too high a rate for this.
> >
> > I suggest that we restrict this to top-level, foreground, and secure
> > contexts.  Note that foreground is a necessary precondition for the
> > attack, so that restriction doesn't really help here.  Critically,
> > rate limit access much more than the 60Hz recommended for the
> > accelerometer.  5Hz might be sufficient here, maybe even lower.
>  How does restricting this to secure contexts help?
> >>> This is a good point to remember in this kinds of attacks. So often has
> >>> use of secure context suggested as the way to
> >>> fix issues, when it often doesn't really help at all with the core
> >>> issue.
> >>>
> >>> And the initial email did have
> >>> "Unshipping for non-secure context and making it HTTPS-only wouldn't
> >>> address the attack."
> >>>
> >> While it does not address the attack, it should be limited to secure
> >> context, if we keep the API. It's actually in the spec.
> > Why is that an advantage?  Any attacker can use a secure context. The
> > word "secure" here relates to the security of the transport layer, but
> > if the origin itself is untrusted (which it is) exposing an unsafe
> > functionality to a "secure" context is just as unsafe.
> >
> > (And on the practical side of things, any attacker can use a free or
> > paid CA service to deliver their attack code through a secure channel.)
> >
>
> Yes, yes and yes. This is not about the attack at all.
> This is about powerful features using secure contexts.
>

I realize that a number of people like this "powerful features" concept,
but I don't really think it's that useful a frame, for precisely the
reasons on display here, nor is it really the consensus inside Mozilla.
Rather, the policy is that we will move to requiring all new features to
have HTTPS [0].

I agree that the specification specifically says that this feature must
only be exposed in a secure context, and so we should conform to that (or
just remove the feature), but I don't think the "powerful features"
argument is particularly persuasive in that decision.

-Ekr

[0]
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ambient Light Sensor API

2017-04-26 Thread Eric Rescorla
On Wed, Apr 26, 2017 at 2:01 AM, Gervase Markham <g...@mozilla.org> wrote:

> On 25/04/17 16:46, Eric Rescorla wrote:
> > This suggests that maybe we could just turn it off
>
> It would be sad to remove a capability from the web platform which
> native apps have.


I'm not sure why it would be particularly sad if almost nobody uses it.


> Surely we can avoid this problem without being so
> drastic?


Perhaps, but actually designing such security measures is expensive, so
absent some argument that this is in wide use, probably doesn't
pass a cost/benefit test.

-Ekr


> Is it right that one key use of this sensor is to see if the
> person has the phone up against their face, right? If so, reducing to a
> small number of quantized levels would do the trick.
>
> Gerv
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ambient Light Sensor API

2017-04-25 Thread Eric Rescorla
On Tue, Apr 25, 2017 at 5:26 PM, Salvador de la Puente <
sdelapue...@mozilla.com> wrote:

> So the risk is not that high since if the image is not protected I can get
> it and do evil things without requiring the Light Sensor API. Isn't that
> right?
>

Generally, we take cross-origin information theft pretty seriously.
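
To make the attack steps quoted below concrete: the embedded image is
fetched and rendered with the victim's cookies, but its pixels are
deliberately unreadable cross-origin, and that is exactly the barrier the
sensor side channel routes around. A minimal sketch (the URL is
hypothetical):

  const img = new Image();
  img.src = 'https://victim.example/avatar.png'; // request carries the user's cookies
  img.onload = () => {
    document.body.appendChild(img);              // renders the personalized image fine
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);                    // taints the canvas
    try {
      ctx.getImageData(0, 0, 1, 1);              // throws SecurityError on a tainted canvas
    } catch (e) {
      // This is the wall the attack climbs over: display the image one
      // blown-up pixel at a time and read the screen's brightness back
      // through the ambient light sensor.
    }
  };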

-Ekr



> On Wed, Apr 26, 2017 at 1:30 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>>
>>
>> On Tue, Apr 25, 2017 at 3:40 PM, Salvador de la Puente <
>> sdelapue...@mozilla.com> wrote:
>>
>>> The article says:
>>>
>>> Embed an image from the attacked domain; generally this will be a
>>> resource
>>> > which varies for different authenticated users such as the logged-in
>>> user’s
>>> > avatar or a security code.
>>> >
>>>
> >>> And then it refers all the steps to this image (binarizing, expanding
> >>> and measuring per pixel), but if I can embed that image, it is because
> >>> I know the URL for it and the proper auth tokens if it is protected.
> >>> In that case, why not simply steal the image?
>>>
>>
>> The simple version of this is that the image is cookie protected.
>>
>> -Ekr
>>
>>
>>> On Wed, Apr 26, 2017 at 12:23 AM, Jonathan Kingston <j...@mozilla.com>
>>> wrote:
>>>
>>> > Auth related images are the attack vector, that and history attacks on
>>> > same domain.
>>> >
>>> > On Tue, Apr 25, 2017 at 11:17 PM, Salvador de la Puente <
>>> > sdelapue...@mozilla.com> wrote:
>>> >
>>> >> Sorry for my ignorance but, in the case of stealing cross-origin
>>> >> resources, I don't get the point of the attack. If I have the ability
>>> >> to embed the image in step 1, why not simply send it to evil.com for
>>> >> further processing? How is it possible for evil.com to get access to
>>> >> protected resources?
>>> >>
>>> >> On Tue, Apr 25, 2017 at 8:04 PM, Ehsan Akhgari <
>>> ehsan.akhg...@gmail.com>
>>> >> wrote:
>>> >>
>>> >> > On 04/25/2017 10:25 AM, Andrew Overholt wrote:
>>> >> >
>>> >> >> On Tue, Apr 25, 2017 at 9:35 AM, Eric Rescorla <e...@rtfm.com>
>>> wrote:
>>> >> >>
>>> >> >> Going back to Jonathan's (I think) question. Does anyone use this
>>> at
>>> >> all
>>> >> >>> in
>>> >> >>> the field?
>>> >> >>>
>>> >> >>> Chrome's usage metrics say <= 0.0001% of page loads:
>>> >> >> https://www.chromestatus.com/metrics/feature/popularity#AmbientLightSensorConstructor.
>>> >> >>
>>> >> >
>>> >> > This is the new version of the spec which we don't ship.
>>> >> >
>>> >> >
>>> >> > We are going to collect telemetry in
>>> >> >> https://bugzilla.mozilla.org/show_bug.cgi?id=1359124.
>>> >> >> ___
>>> >> >> dev-platform mailing list
>>> >> >> dev-platform@lists.mozilla.org
>>> >> >> https://lists.mozilla.org/listinfo/dev-platform
>>> >> >>
>>> >> >
>>> >> > ___
>>> >> > dev-platform mailing list
>>> >> > dev-platform@lists.mozilla.org
>>> >> > https://lists.mozilla.org/listinfo/dev-platform
>>> >> >
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> 
>>> >> ___
>>> >> dev-platform mailing list
>>> >> dev-platform@lists.mozilla.org
>>> >> https://lists.mozilla.org/listinfo/dev-platform
>>> >>
>>> >
>>> >
>>>
>>>
>>> --
>>> 
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>
>>
>
>
> --
> 
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ambient Light Sensor API

2017-04-25 Thread Eric Rescorla
This suggests that maybe we could just turn it off

On Tue, Apr 25, 2017 at 7:25 AM, Andrew Overholt <overh...@mozilla.com>
wrote:

> On Tue, Apr 25, 2017 at 9:35 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> Going back to Jonathan's (I think) question. Does anyone use this at all
>> in
>> the field?
>>
>
> Chrome's usage metrics say <= 0.0001% of page loads:
> https://www.chromestatus.com/metrics/feature/popularity#AmbientLightSensorConstructor.
> We are going to collect telemetry in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1359124.
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ambient Light Sensor API

2017-04-25 Thread Eric Rescorla
Going back to Jonathan's (I think) question. Does anyone use this at all in
the field?

-Ekr


On Tue, Apr 25, 2017 at 6:10 AM, Kurt Roeckx  wrote:

> On 2017-04-25 00:04, Martin Thomson wrote:
> > I think that 60Hz is too high a rate for this.
> >
> > I suggest that we restrict this to top-level, foreground, and secure
> > contexts.  Note that foreground is a necessary precondition for the
> > attack, so that restriction doesn't really help here.  Critically,
> > rate limit access much more than the 60Hz recommended for the
> > accelerometer.  5Hz might be sufficient here, maybe even lower.
>
> Note that they already talk about 2Hz being the rate they think is
> realistic to do their attack, and that 5Hz is probably an upper bound of
> their attack, so reducing it from 60 to 5 doesn't actually change anything
> and you would need to go even lower. You could for instance do something
> like only allowing it 1 time per minute, and require user approval for
> higher frequencies.
>
> The other suggestion they have in their paper is to reduce the number of
> values you return, say 4 different values.
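
To make that concrete, here is a sketch of what content would observe
under that sort of policy (Generic Sensor API shape; the rate cap and the
bucket boundaries are illustrative, and the clamping would of course have
to be enforced by the browser rather than left to the page):

  const sensor = new AmbientLightSensor({ frequency: 1 / 60 }); // ~one reading per minute
  sensor.addEventListener('reading', () => {
    // Collapse the raw lux value into a handful of coarse buckets.
    const thresholds = [50, 500, 10000];  // dark / dim / indoor / bright
    const level = thresholds.filter(t => sensor.illuminance >= t).length;
    console.log('light level:', level);   // 0..3, nothing finer
  });
  sensor.start();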
>
>
> Kurt
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Quantum Flow Engineering Newsletter #5

2017-04-18 Thread Eric Rescorla
On Tue, Apr 18, 2017 at 4:19 AM, Jack Moffitt  wrote:

> > Another really nice effort that is starting to unfold and I'm super
> excited
> > about is the new Photon performance project
> > , which is a
> focused
> > effort on the front-end performance.  This includes everything from
> > engineering the new UI with things like animations running on the
> > compositor in mind from the get-go, being laser focused on guaranteeing
> > good performance on key UI interactions such as tab opening and closing,
> > and lots of focused measurements and fixes to the browser front-end.
>
> I think the order of operations here is unfortunate. What we'd prefer
> is that WebRender land first, and then Photon be designed around what
> is possible in the GPU-backed world.


When will that landing be? How much time will that leave for designing
Photon?

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling Pointer Events in Firefox (desktop) Nightly on Mac and Linux

2017-04-06 Thread Eric Rescorla
On Thu, Apr 6, 2017 at 5:26 AM, Ehsan Akhgari 
wrote:

> On Thu, Apr 6, 2017 at 12:57 AM, L. David Baron  wrote:
>
> > On Thursday 2017-04-06 00:33 -0400, Ehsan Akhgari wrote:
> > > In general, I should also say that designing features with
> > > fingerprinting in mind is *extremely* difficult and takes a lot of
> > > effort on the part of all browser vendors, which would be difficult to
> > > do effectively without some broad agreement that the extra effort spent
> > > is worth it.  WHATWG (in HTML at least) mostly treats this by
> > > documenting the exposed vectors
> > >  > fingerprinting-vector>.
> > >  I wonder what the position of the W3C TAG is?
> >
> > That's actually a pretty easy question to answer:
> > https://www.w3.org/2001/tag/doc/unsanctioned-tracking/
> > (Unsanctioned Web Tracking, W3C TAG Finding 17 July 2015)
> >
>
> Oh, right.  Thanks for the link.  (Now I remember that I had read this, and
> forgotten it!)
>
> Given <https://www.w3.org/2001/tag/doc/unsanctioned-tracking/#limitations-of-technical-solutions>
> and the previous links I posted, what _is_ Mozilla's official's policy
> towards fingerprinting?  In reality we do act as if we believe that it is
> untenable to protect against it purely by restricting new Web features at
> this point, so if this isn't our official policy it would be useful to have
> that conversation explicitly.  If it isn't, we shouldn't be holding people
> to a higher bar than respecting privacy.resistsFingerprinting (or where we
> really place that bar.  :-)
>

I don't know if we have an official policy, but this roughly reflects my
view, with the caveat that we shouldn't add functions which have an
extremely high ratio of fingerprinting to usefulness, though I don't have
a clear test for where that line is.

-Ekr


>
> Cheers,
> --
> Ehsan
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Revocation protocol idea

2017-03-31 Thread Eric Rescorla
On Fri, Mar 31, 2017 at 4:20 AM, Salvador de la Puente <
sdelapue...@mozilla.com> wrote:

> Hi Eric
>
> On Wed, Mar 22, 2017 at 6:11 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> There seem to be three basic ideas here:
>>
>> 0. Blacklisting at the level of API rather than site.
>> 1. Some centralized but democratic  mechanism for building a list of
>> misbehaving sites.
>> 2. A mechanism for distributing the list of misbehaving sites to clients.
>>
>
> I think I did not explain it well. It would be a blacklist at the site
> level and it would not be centralised but distributed.
>

I had understood your point that it would be centrally organized but
democratically decided and then
distributed to users.


> The idea is that if a site is harmful for the user, all its permissions
> should be revoked and we should communicate to the user why this site is
> harmful. The list of misbehaving sites, the reasons why they are
> dangerous and the evidence supporting the misbehaviour should be in a
> cross-browser distributed DB.
>

Yes, I understood that.


>> As Jonathan notes, Firefox already has a mechanism for doing #2, which
>> is to say "Safe Browsing". Now, Safe Browsing is binary (either a site
>> is good or bad; specific APIs aren't disabled), but it's easy to see how
>> you would extend it to that if you actually wanted to provide that
>> function. I'm not sure that's actually very attractive--it's hard enough
>> for users to understand Safe Browsing. Safe Browsing is of course
>> centralized, but that comes with a number of advantages and it's not
>> clear what the advantage of decentralized blacklist dissemination is,
>> given the networking realities.
>>
>> You posit a mechanism for forming the list of misbehaving sites, but
>> distributed reputation is really hard, and it's not clear that Google is
>> actually doing a bad job of running Safe Browsing, so given that this is
>> a fairly major unsolved problem, I'd be reluctant to set out to build a
>> mechanism like this without a pretty clear design.
>>
>
> I've been looking at this paper on prediction markets based on BitCoin
> <http://bravenewcoin.com/assets/Whitepapers/Augur-A-Decentralized-Open-Source-Platform-for-Prediction-Markets.pdf>
> for inspiration. It is true that distributed reputation is a hard problem,
> but I think we could adapt the concepts in that paper to this scenario if
> reviewers "bet" on site reputation and there is some incentive. Of course,
> further research is needed to mitigate the chance of a reviewer lying in
> their report and to prevent forms of Sybil attacks, but there seem to be
> some solutions out there.
>

As I said, we have mechanisms that seem to me to be doing a fairly adequate
job at this general class of problem, with the major drawback being that
they are centralized. I'm not saying that that couldn't be addressed, but
it doesn't seem like such a solution is ready to hand. As such, this seems
like it is fairly far outside the realm of anything Firefox would do in in
the short to medium term.

-Ekr


>
>>
>> -Ekr
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Mar 21, 2017 at 2:40 PM, Salvador de la Puente <
>> sdelapue...@mozilla.com> wrote:
>>
>>> Hi Jonathan
>>>
>>> In the short and medium terms, it scales better than a white list and
>>> distributes the effort of finding API misuses. Mozilla and other vendor
>>> browsers could still review the code of the site and add their vote in
>>> favour or against the Web property.
>>>
>>> In the long term, the system would help find new security threats such
>>> as tracking or fingerprinting algorithms by encouraging the honest
>>> reporting of evidence, somehow.
>>>
>>> With this system, the threat is considered the result of both potential
>>> risk and chances of actual misuse. The revocation protocol reduces
>>> threatening situations by minimising the number of Web properties abusing
>>> the APIs.
>>>
>>> As a side effect, it provides the infrastructure for a real distributed
>>> and
>>> cross browser database which can be of utility for other unforeseen uses.
>>>
>>> What do you think?
>>>
>>>
>>> El 8 mar. 2017 10:54 p. m., "Jonathan Kingston" <jkings...@mozilla.com>
>>> escribió:
>>>
>>> Hey,
>>> What would be the advantage of using this over the safesite list?
>>> Obviously there would be less broken sites on the web as we would be
>>> permitting the site to still be viewed by the user rather than just
>>> revoking the permission but are there other advantages?

Re: Revocation protocol idea

2017-03-21 Thread Eric Rescorla
There seem to be three basic ideas here:

0. Blacklisting at the level of API rather than site.
1. Some centralized but democratic  mechanism for building a list of
misbehaving sites.
2. A mechanism for distributing the list of misbehaving sites to clients.

As Jonathan notes, Firefox already has a mechanism for doing #2, which is
to say "Safe Browsing". Now, Safe Browsing is binary (either a site is
good or bad; specific APIs aren't disabled), but it's easy to see how you
would extend it to that if you actually wanted to provide that function.
I'm not sure that's actually very attractive--it's hard enough for users
to understand Safe Browsing. Safe Browsing is of course centralized, but
that comes with a number of advantages and it's not clear what the
advantage of decentralized blacklist dissemination is, given the
networking realities.
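
For illustration, the per-API variant would amount to something like this
(all names here are hypothetical):

  // Instead of Safe Browsing's binary good/bad verdict, the list maps
  // origins to the specific APIs they have been found abusing, and the
  // browser consults it before exposing an API to a page.
  const revocations = new Map([
    ['https://evil.example', new Set(['AmbientLightSensor', 'geolocation'])],
  ]);

  function apiAllowed(origin, api) {
    const revoked = revocations.get(origin);
    return !(revoked && revoked.has(api));
  }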

You posit a mechanism for forming the list of misbehaving sites, but
distributed reputation is really hard, and it's not clear that Google is
actually doing a bad job of running Safe Browsing, so given that this is a
fairly major unsolved problem, I'd be reluctant to set out to build a
mechanism like this without a pretty clear design.

-Ekr







On Tue, Mar 21, 2017 at 2:40 PM, Salvador de la Puente <
sdelapue...@mozilla.com> wrote:

> Hi Jonathan
>
> In the short and medium terms, it scales better than a white list and
> distributes the effort of finding API misuses. Mozilla and other vendor
> browsers could still review the code of the site and add their vote in
> favour or against the Web property.
>
> In the long term, the system would help find new security threats such as
> tracking or fingerprinting algorithms by encouraging the honest reporting
> of evidence, somehow.
>
> With this system, the threat is considered the result of both potential
> risk and chances of actual misuse. The revocation protocol reduces
> threatening situations by minimising the number of Web properties abusing
> the APIs.
>
> As a side effect, it provides the infrastructure for a real distributed and
> cross browser database which can be of utility for other unforeseen uses.
>
> What do you think?
>
>
> El 8 mar. 2017 10:54 p. m., "Jonathan Kingston" 
> escribió:
>
> Hey,
> What would be the advantage of using this over the safesite list? Obviously
> there would be less broken sites on the web as we would be permitting the
> site to still be viewed by the user rather than just revoking the
> permission but are there other advantages?
>
> On Sun, Mar 5, 2017 at 4:23 PM, Salvador de la Puente <
> sdelapue...@mozilla.com> wrote:
>
> > Hi, folks.
> >
> > Some time ago, I started to think about an idea to experiment with new
> > powerful Web APIs: a sort of "deceptive site" database for harmful uses
> > of browser APIs. I've been curating that idea and came up with the
> > concept of a "revocation protocol" to revoke user-granted permissions
> > for origins abusing those APIs.
> >
> > I published the idea on GitHub [1] and I was wondering about the utility
> > and feasibility of such a system, so I would appreciate any feedback you
> > want to provide.
> >
> > I hope it will be of interest for you.
> >
> > [1] https://github.com/delapuente/revocation-protocol
> >
> > --
> > 
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-13 Thread Eric Rescorla
On Mon, Mar 13, 2017 at 12:22 AM, Frederik Braun  wrote:

> On 12.03.2017 04:08, Cameron Kaiser wrote:
> > On 3/10/17 4:38 AM, Masatoshi Kimura wrote:
> >> On 2017/03/10 6:53, Mike Connor wrote:
> >>> - Two-factor auth must be a requirement for all users approving or
> >>> pushing a change.
> >>
> >> I have no mobile devices. How can I use 2FA?
> >>
> >> Previously I was suggested to buy a new PC and SSD only to shorten the
> >> build time. Now do I have to buy a smartphone only to contribute
> >> Mozilla? What's the next?
> >
> > I actually use a Perl script to do HOTP. And there are things like
> > Firekey. No mobile device necessary.
>
> Doesn't that defeat the point of a second factor?
>

Not entirely, no, because it is not replayable.
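
For instance, HOTP (RFC 4226) is just an HMAC-SHA1 over an 8-byte counter
that both sides advance, so each code is single-use. A minimal Node.js
sketch (secret hex-encoded here for brevity; enrollment flows usually hand
out base32):

  const crypto = require('crypto');

  function hotp(secretHex, counter, digits = 6) {
    const key = Buffer.from(secretHex, 'hex');
    const msg = Buffer.alloc(8);
    msg.writeBigUInt64BE(BigInt(counter));           // 8-byte big-endian counter
    const digest = crypto.createHmac('sha1', key).update(msg).digest();
    const offset = digest[digest.length - 1] & 0x0f; // RFC 4226 dynamic truncation
    const code = digest.readUInt32BE(offset) & 0x7fffffff;
    return String(code % 10 ** digits).padStart(digits, '0');
  }

  // RFC 4226 test secret "12345678901234567890"; counter 0 yields "755224".
  console.log(hotp('3132333435363738393031323334353637383930', 0));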

-Ekr


>
> >
> > Cameron Kaiser
> > tier-3-2-1-contact!
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-12 Thread Eric Rescorla
On Fri, Mar 10, 2017 at 9:03 PM, L. David Baron <dba...@dbaron.org> wrote:

> On Friday 2017-03-10 19:33 -0800, Eric Rescorla wrote:
> > We have been using Phabricator for our reviews in NSS and its interdiffs
> > work pretty well
> > (modulo rebases, which are not so great), and it's very easy to handle
> LGTM
> > with
> > nits and verify the nits.
>
> For what it's worth, I think proper interdiffs have two columns of
> [ -+] at the beginning of each line, not one column like diffs do.
> I've gotten used to reading interdiffs as diff -u of a diff -u, and
> while it takes a little getting used to, once you're used to it,
> it actually represents what an interdiff is and is quite usable.  I
> think anything that pretends that something like this:
>
>   // Frame has a LayerActivityProperty property
>   FRAME_STATE_BIT(Generic, 54, NS_FRAME_HAS_LAYER_ACTIVITY_PROPERTY)
>
> + // Frame owns anonymous boxes whose style contexts it will need to
> update during
> + // a stylo tree traversal.
> + FRAME_STATE_BIT(Generic, 55, NS_FRAME_OWNS_ANON_BOXES)
> +
>  +// If this bit is set, then reflow may be dispatched from the current
>  +// frame instead of the root frame.
> -+FRAME_STATE_BIT(Generic, 55, NS_FRAME_DYNAMIC_REFLOW_ROOT)
> ++FRAME_STATE_BIT(Generic, 56, NS_FRAME_DYNAMIC_REFLOW_ROOT)
>  +
>   // Set for all descendants of MathML sub/supscript elements (other than
> the
>   // base frame) to indicate that the SSTY font feature should be used.
>   FRAME_STATE_BIT(Generic, 58, NS_FRAME_MATHML_SCRIPT_DESCENDANT)
>
> can be represented with only one column of [+- ] at the beginning
> is going to fail for some substantial set of important cases (such
> as rebases, as you mention).
>

The systems I am most familiar with (Phabricator, Rietveld) present
interdiffs by allowing you to look at the diff between any revision of the
CL (including the base revision, i.e., the code as-is). This works pretty
well for anything other than rebases (and is actually rather equivalent to
the UI you get with Github when people update PRs by pushing new commits
onto the branch).

What sort of UI you want for rebases seems like a bit more of an open
question. I can imagine several things:

1. For simple rebases (where there's not much interaction with the
surrounding code), you probably just want to see the patch in context as
if the rebase hadn't happened.
2. For more complicated rebases (where there is real interaction with the
code that was rebased onto), you want to see the difference between
base1 + CL1 and base2 + CL2, but with the diffs that are due to the
rebase distinguished from the ones that are due to the CL1-CL2
difference (this is what Phabricator does by marking the rebase artifacts
as lighter).
3. The design you suggest (which, at least for me, seems like it's
harder to read than designs where the tools provide more support).

It seems like designs here are evolving and it's probably going to be a
matter of personal taste, at least for a while.

-Ekr


> (That's a piece of interdiff from rebasing my own patch queue
> earlier this week.)
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-10 Thread Eric Rescorla
On Fri, Mar 10, 2017 at 7:23 PM, smaug via governance <
governa...@lists.mozilla.org> wrote:

> On 03/10/2017 12:59 AM, Bobby Holley wrote:
>
>> At a high level, I think the goals here are good.
>>
>> However, the tooling here needs to be top-notch for this to work, and the
>> standard approach of flipping on an MVP and dealing with the rest in
>> followup bugs isn't going to be acceptable for something so critical to
>> our
>> productivity as landing code. The only reason more developers aren't
>> complaining about the MozReview+autoland workflow right now is that it's
>> optional.
>>
>> The busiest and most-productive developers (ehsan, bz, dbaron, etc) tend
>> not to pay attention to new workflows because they have too much else on
>> their plate. The onus needs to be on the team deploying this to have the
>> highest-volume committers using the new system happily and voluntarily
>> before it becomes mandatory. That probably means that the team should not
>> have any deadline-oriented incentives to ship it before it's ready.
>>
>> Also, getting rid of "r+ with comments" is a non-starter.
>>
>
> FWIW, with my reviewer hat on, I think that is not true, _assuming_ there
> is a reliable interdiff for patches.
> And not only for MozReview patches but for all patches. (And MozReview's
> interdiff isn't reliable; it has data-loss issues, so it doesn't really
> count here.)
> I'd be ok to do a quick r+ if interdiff was working well.
>

Without taking a position on the broader proposal, I agree with this.

We have been using Phabricator for our reviews in NSS and its interdiffs
work pretty well
(modulo rebases, which are not so great), and it's very easy to handle LGTM
with
nits and verify the nits.

-Ekr


>
>
>
> -Olli
>
>
>> bholley
>>
>>
>> On Thu, Mar 9, 2017 at 1:53 PM, Mike Connor  wrote:
>>
>> (please direct followups to dev-planning, cross-posting to governance,
>>> firefox-dev, dev-platform)
>>>
>>>
>>> Nearly 19 years after the creation of the Mozilla Project, commit access
>>> remains essentially the same as it has always been.  We've evolved the
>>> vouching process a number of times, CVS has long since been replaced by
>>> Mercurial & others, and we've taken some positive steps in terms of
>>> securing the commit process.  And yet we've never touched the core idea
>>> of
>>> granting developers direct commit access to our most important
>>> repositories.  After a large number of discussions since taking ownership
>>> over commit policy, I believe it is time for Mozilla to change that
>>> practice.
>>>
>>> Before I get into the meat of the current proposal, I would like to
>>> outline
>>> a set of key goals for any change we make.  These goals have been
>>> informed
>>> by a set of stakeholders from across the project including the
>>> engineering,
>>> security, release and QA teams.  It's inevitable that any significant
>>> change will disrupt longstanding workflows.  As a result, it is critical
>>> that we are all aligned on the goals of the change.
>>>
>>>
>>> I've identified the following goals as critical for a responsible commit
>>> access policy:
>>>
>>>
>>> - Compromising a single individual's credentials must not be
>>> sufficient
>>> to land malicious code into our products.
>>> - Two-factor auth must be a requirement for all users approving or
>>> pushing a change.
>>> - The change that gets pushed must be the same change that was
>>> approved.
>>> - Broken commits must be rejected automatically as a part of the
>>> commit
>>> process.
>>>
>>>
>>> In order to achieve these goals, I propose that we commit to making the
>>> following changes to all Firefox product repositories:
>>>
>>>
>>> - Direct commit access to repositories will be strictly limited to
>>> sheriffs and a subset of release engineering.
>>>- Any direct commits by these individuals will be limited to
>>> fixing
>>>bustage that automation misses and handling branch merges.
>>> - All other changes will go through an autoland-based workflow.
>>>- Developers commit to a staging repository, with scripting that
>>>connects the changeset to a Bugzilla attachment, and integrates
>>> with review
>>>flags.
>>>- Reviewers and any other approvers interact with the changeset as
>>>today (including ReviewBoard if preferred), with Bugzilla flags as
>>> the
>>>canonical source of truth.
>>>- Upon approval, the changeset will be pushed into autoland.
>>>- If the push is successful, the change is merged to
>>> mozilla-central,
>>>and the bug updated.
>>>
>>> I know this is a major change in practice from how we currently operate,
>>> and my ask is that we work together to understand the impact and
>>> concerns.
>>> If you find yourself disagreeing with the goals, let's have that
>>> discussion
>>> instead of arguing about the solution.  If you agree with the goals, but
>>> not the solution, I'd love to hear alternative ideas for how we can
>>> achieve the outcomes outlined above.

Re: The future of commit access policy for core Firefox

2017-03-09 Thread Eric Rescorla
On Thu, Mar 9, 2017 at 3:11 PM,  wrote:

> On Friday, March 10, 2017 at 10:53:50 AM UTC+13, Mike Connor wrote:
> > (please direct followups to dev-planning, cross-posting to governance,
> > firefox-dev, dev-platform)
> >
> >
> > Nearly 19 years after the creation of the Mozilla Project, commit access
> > remains essentially the same as it has always been.  We've evolved the
> > vouching process a number of times, CVS has long since been replaced by
> > Mercurial & others, and we've taken some positive steps in terms of
> > securing the commit process.  And yet we've never touched the core idea
> of
> > granting developers direct commit access to our most important
> > repositories.  After a large number of discussions since taking ownership
> > over commit policy, I believe it is time for Mozilla to change that
> > practice.
> >
> > Before I get into the meat of the current proposal, I would like to
> outline
> > a set of key goals for any change we make.  These goals have been
> informed
> > by a set of stakeholders from across the project including the
> engineering,
> > security, release and QA teams.  It's inevitable that any significant
> > change will disrupt longstanding workflows.  As a result, it is critical
> > that we are all aligned on the goals of the change.
> >
> >
> > I've identified the following goals as critical for a responsible commit
> > access policy:
> >
> >
> >- Compromising a single individual's credentials must not be
> sufficient
> >to land malicious code into our products.
> >- Two-factor auth must be a requirement for all users approving or
> >pushing a change.
> >- The change that gets pushed must be the same change that was
> approved.
> >- Broken commits must be rejected automatically as a part of the
> commit
> >process.
> >
> >
> > In order to achieve these goals, I propose that we commit to making the
> > following changes to all Firefox product repositories:
> >
> >
> >- Direct commit access to repositories will be strictly limited to
> >sheriffs and a subset of release engineering.
> >   - Any direct commits by these individuals will be limited to fixing
> >   bustage that automation misses and handling branch merges.
> >- All other changes will go through an autoland-based workflow.
> >   - Developers commit to a staging repository, with scripting that
> >   connects the changeset to a Bugzilla attachment, and integrates
> > with review
> >   flags.
> >   - Reviewers and any other approvers interact with the changeset as
> >   today (including ReviewBoard if preferred), with Bugzilla flags as
> the
> >   canonical source of truth.
> >   - Upon approval, the changeset will be pushed into autoland.
> >   - If the push is successful, the change is merged to
> mozilla-central,
> >   and the bug updated.
> >
> > I know this is a major change in practice from how we currently operate,
> > and my ask is that we work together to understand the impact and
> concerns.
> > If you find yourself disagreeing with the goals, let's have that
> discussion
> > instead of arguing about the solution.  If you agree with the goals, but
> > not the solution, I'd love to hear alternative ideas for how we can
> achieve
> > the outcomes outlined above.
> >
> > -- Mike
>
>
> How will uplifts work? Can only sheriffs land them?
>
> This would do away with "r+ with comments addressed". Reviewers typically
> only say this for patches submitted by people they trust. So removing "r+
> with comments" would mean reviewers would need to re-review code, causing
> an extra delay and extra reviewing load. Is there some way we can keep the
> "r+ with comments addressed" behaviour for trusted contributors?
>

One potential approach would be to keep it for contributors who themselves
were L3 committers in the current framework (and met a similar bar in the
future).

The way to rationalize this is that those people could always set up a
sock puppet account, submit a patch, and then review it themselves and it
would be landed, so the policy is ultimately "sign off by one L3
committer".

-Ekr


>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please write good commit messages before asking for code review

2017-03-09 Thread Eric Rescorla
On Thu, Mar 9, 2017 at 2:53 PM, Ben Kelly <bke...@mozilla.com> wrote:

>
>
> On Thu, Mar 9, 2017 at 5:48 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>>
>>
>> On Thu, Mar 9, 2017 at 2:43 PM, Ben Kelly <bke...@mozilla.com> wrote:
>>
>>> (Just continuing the thread here.)
>>>
>>> Personally I prefer looking at the bug for the full context and single
>>> point of truth.  Also, security bugs typically can't have extensive
>>> commit
>>> messages and moving a lot of context to commit messages might paint a
>>> target on security patches.
>>>
>>
>> Can't you determine that by just looking to see if the bug is visible?
>>
>
> So you are saying we should just write SECURE BUG REDACTED in these commit
> messages now?  Or do we have to fabricate a paragraph to match other
> commits now?
>

I'm not saying either of these things. What I am saying is that it's
trivial to determine security bugs by checking to see if you can see them
in Bugzilla. Do you disagree with this?


> Right now our security bug process asks about the commit message and if it
> "paints a target" on the patch.  If you want to change our commit message
> policy, please adjust that or take it into account.
>
> And I also agree with the other commenters here that complexity should be
> described in code comments.
>

You are arguing with other people here, not me.

-Ekr


>
> Ultimately as long as the code is explained via comments, the bug is
> up-to-date, and our secure bug process isn't broken I don't have a strong
> opinion here.
>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-09 Thread Eric Rescorla
First, let me state that I am generally in support of this type of change.

More comments below.

On Thu, Mar 9, 2017 at 1:53 PM, Mike Connor  wrote:

> (please direct followups to dev-planning, cross-posting to governance,
> firefox-dev, dev-platform)
>
>
> Nearly 19 years after the creation of the Mozilla Project, commit access
> remains essentially the same as it has always been.  We've evolved the
> vouching process a number of times, CVS has long since been replaced by
> Mercurial & others, and we've taken some positive steps in terms of
> securing the commit process.  And yet we've never touched the core idea of
> granting developers direct commit access to our most important
> repositories.  After a large number of discussions since taking ownership
> over commit policy, I believe it is time for Mozilla to change that
> practice.
>
> Before I get into the meat of the current proposal, I would like to
> outline a set of key goals for any change we make.  These goals have been
> informed by a set of stakeholders from across the project including the
> engineering, security, release and QA teams.  It's inevitable that any
> significant change will disrupt longstanding workflows.  As a result, it is
> critical that we are all aligned on the goals of the change.
>
>
> I've identified the following goals as critical for a responsible commit
> access policy:
>
>
>- Compromising a single individual's credentials must not be
>sufficient to land malicious code into our products.
>
I think this is a good long-term goal, but I don't think it's one we need
to achieve now. At the moment, I would settle for "only a few privileged
people can single-handedly land malicious code into our products". In
support of this, I would note that what you propose below is insufficient
to achieve your stated objective, because there will still be
single-person admin access to machines.


>- Two-factor auth must be a requirement for all users approving or
>pushing a change.
>- The change that gets pushed must be the same change that was
>approved.
>- Broken commits must be rejected automatically as a part of the
>commit process.
>
>
> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
>
>- Direct commit access to repositories will be strictly limited to
>sheriffs and a subset of release engineering.
>   - Any direct commits by these individuals will be limited to fixing
>   bustage that automation misses and handling branch merges.
>
I think this is a good eventual goal, but in the short term, I think it's
probably useful to keep checkin-needed floating around, especially given
the somewhat iffy state of the toolchain.


>
>- All other changes will go through an autoland-based workflow.
>   - Developers commit to a staging repository, with scripting that
>   connects the changeset to a Bugzilla attachment, and integrates with 
> review
>   flags.
>   - Reviewers and any other approvers interact with the changeset as
>   today (including ReviewBoard if preferred), with Bugzilla flags as the
>   canonical source of truth.
>   - Upon approval, the changeset will be pushed into autoland.
>
Implicit in this is some sort of hierarchy of reviewers (tied to what was
previously L3 commit access?) that says who can review a patch. Otherwise,
I can just create a dummy account, r+ my own patch, and Land Ho! IIRC
Chromium does this by saying that LGTM implies autolanding only if the
reviewer could have landed the code themselves.

-Ekr



>- If the push is successful, the change is merged to mozilla-central,
>   and the bug updated.
>
> I know this is a major change in practice from how we currently operate,
> and my ask is that we work together to understand the impact and concerns.
> If you find yourself disagreeing with the goals, let's have that discussion
> instead of arguing about the solution.  If you agree with the goals, but
> not the solution, I'd love to hear alternative ideas for how we can achieve
> the outcomes outlined above.
>
> -- Mike
>
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please write good commit messages before asking for code review

2017-03-09 Thread Eric Rescorla
On Thu, Mar 9, 2017 at 2:43 PM, Ben Kelly  wrote:

> On Thu, Mar 9, 2017 at 5:35 PM, Mike Hommey  wrote:
>
> > On Thu, Mar 09, 2017 at 02:46:53PM -0500, Ehsan Akhgari wrote:
> > > I review a large number of patches on a typical day, and usually I have
> > to
> > > spend a fair amount of time to just understand what the patch is doing.
> > As
> > > the patch author, you can do a lot to help make this easier by *writing
> > > better commit messages*.  Starting now, I'm going to try out a new
> > practice
> > > for a while: I'm going to first review the commit message of all
> patches,
> > > and if I can't understand what the patch does by reading the commit
> > message
> > > before reading any of the code, I'll r- and ask for another version of
> > the
> > > patch.
> >
> > Sometimes, the commit message does explain what it does in a sufficient
> > manner, but finding out why requires reading the bug, assuming it's
> > written there. I think this information should also be in the commit
> > message.
>
>
> (Just continuing the thread here.)
>
> Personally I prefer looking at the bug for the full context and single
> point of truth.  Also, security bugs typically can't have extensive commit
> messages and moving a lot of context to commit messages might paint a
> target on security patches.
>

Can't you determine that by just looking to see if the bug is visible?

-Ekr


> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please write good commit messages before asking for code review

2017-03-09 Thread Eric Rescorla
I'm in favor of good commit messages, but I would note that current m-c
convention really pushes against this, because people seem to feel that
commit messages should be one line. Not sure what to do about that,
but thought I would mention it.

-Ekr


On Thu, Mar 9, 2017 at 12:10 PM, Boris Zbarsky  wrote:

> On 3/9/17 2:46 PM, Ehsan Akhgari wrote:
>
>> Starting now, I'm going to try out a new practice
>> for a while: I'm going to first review the commit message of all patches,
>> and if I can't understand what the patch does by reading the commit
>> message
>> before reading any of the code, I'll r- and ask for another version of the
>> patch.
>>
>
> I will be doing likewise.
>
> I believe dbaron does this already.
>
> I encourage others to do this too.
>
> -Boris
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Should &&/|| really be at the end of lines?

2017-02-17 Thread Eric Rescorla
On Thu, Feb 16, 2017 at 11:39 PM, David Major  wrote:

> One thing I like about trailing operators is that they tend to match
> what you'd find in bullet-point prose. Here's a made-up example:
>
> You can apply for a refund of your travel insurance policy if:
> * You cancel within 7 days of purchase, and
> * You have not yet begun your journey, and
> * You have not used any benefits of the plan.
>
> Over time my eyes have come to expect the conjunction on the right.
>

I tend to agree with this, though it's just opinion.

In any case, absent some pretty compelling argument, I don't see a good
reason to change the style here, given that we clearly don't have
consensus to do so.

-Ekr


> On Fri, Feb 17, 2017, at 07:28 PM, gsquel...@mozilla.com wrote:
> > While it's good to know how many people are for or against it, so that we
> > get a sense of where the majority swings, I'd really like to know *why*
> > people have their position. (I could learn something!)
> >
> > So Nick, would you have some reasons for your "strong preference"? And
> > what do you think of the opposite rationale as found in [2]?
> >
> > (But I realize it's more work, so no troubles if you don't have the time
> > to expand on your position here; thank you for your feedback so far.)
> >
> > On Friday, February 17, 2017 at 5:16:42 PM UTC+11, Nicholas Nethercote
> > wrote:
> > > I personally have a strong preference for operators at the end of
> lines.
> > > The codebase seems to agree with me, judging by some rough grepping
> ('fff'
> > > is an alias I have that's roughly equivalent to rgrep):
> > >
> > > $ fff "&&$" | wc -l
> > >   28907
> > > $ fff "^ *&&" | wc -l
> > >3751
> > >
> > > $ fff "||$" | wc -l
> > >   26429
> > > $ fff "^ *||" | wc -l
> > >2977
> > >
> > > $ fff " =$" | wc -l
> > >   39379
> > > $ fff "^ *= " | wc -l
> > > 629
> > >
> > > $ fff " +$" | wc -l
> > >   31909
> > > $ fff "^ *+$" | wc -l
> > > 491
> > >
> > > $ fff " -$" | wc -l
> > >2083
> > > $ fff "^ *-$" | wc -l
> > >  52
> > >
> > > $ fff " ==$" | wc -l
> > >   1501
> > > $ fff "^ *== " | wc -l
> > >   161
> > >
> > > $ fff " !=$" | wc -l
> > >   699
> > > $ fff "^ *!= " | wc -l
> > >   129
> > >
> > > They are rough regexps and probably have some false positives, but the
> > > numbers aren't even close; operators at the end of the line clearly
> > > dominate.
> > >
> > > I will conform for the greater good but silently weep inside every
> time I
> > > see it.
> > >
> > > Nick
> > >
> > > On Fri, Feb 17, 2017 at 8:47 AM,  wrote:
> > >
> > > > Question of the day:
> > > > When breaking overlong expressions, should &&/|| go at the end or the
> > > > beginning of the line?
> > > >
> > > > TL;DR: Coding style says 'end'; I think we should change it to
> > > > 'beginning' for better clarity and consistency with other operators.
> > > >
> > > >
> > > > Our coding style reads:
> > > > "Break long conditions after && and || logical connectives. See
> below for
> > > > the rule for other operators." [1]
> > > > """
> > > > Overlong expressions not joined by && and || should break so the
> operator
> > > > starts on the second line and starts in the same column as the start
> of the
> > > > expression in the first line. This applies to ?:, binary arithmetic
> > > > operators including +, and member-of operators (in particular the .
> > > > operator in JavaScript, see the Rationale).
> > > >
> > > > Rationale: operator at the front of the continuation line makes for
> faster
> > > > visual scanning, because there is no need to read to end of line.
> Also
> > > > there exists a context-sensitive keyword hazard in JavaScript; see
> bug
> > > > 442099, comment 19, which can be avoided by putting . at the start
> of a
> > > > continuation line in long member expression.
> > > > """ [2]
> > > >
> > > >
> > > > I initially focused on the rationale, so I thought *all* operators
> should
> > > > go at the front of the line.
> > > >
> > > > But it seems I've been living a lie!
> > > > &&/|| should apparently be at the end, while other operators (in some
> > > > situations) should be at the beginning.
> > > >
> > > >
> > > > Now I personally think this just doesn't make sense:
> > > > - Why the distinction between &&/|| and other operators?
> > > > - Why would the excellent rationale not apply to &&/||?
> > > > - Pedantically, the style talks about 'expressions *not* joined by
> > > > &&/||', but what about expressions that *are* joined by &&/||?
> > > > (Undefined Behavior!)
> > > >
> > > > Based on that, I believe &&/|| should be made consistent with *all*
> > > > operators, and go at the beginning of lines, aligned with the first
> operand
> > > > above.
> > > >
> > > > And therefore I would propose the following changes to the coding
> style:
> > > > - Remove the lonely &&/|| sentence at [1].
> > > > - Rephrase the first sentence at [2] to something like: "Overlong
> > > > expressions should break so that the operator starts on the second
> > > > line, in the same column as the start of the expression in the
> > > > first line."
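
For illustration, the two conventions side by side (the condition names
are made up):

  let aLongCondition = true, anotherLongCondition = true, aThirdCondition = false;
  const doSomething = () => {};

  // End-of-line style (what the coding style currently prescribes for && / ||):
  if (aLongCondition &&
      anotherLongCondition &&
      aThirdCondition) {
    doSomething();
  }

  // Beginning-of-line style (the proposed change, consistent with other operators):
  if (aLongCondition
      && anotherLongCondition
      && aThirdCondition) {
    doSomething();
  }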

Re: Redirecting http://hg.mozilla.org/ to https://

2017-01-26 Thread Eric Rescorla
Yes. Kill it with fire!

-Ekr


On Fri, Jan 27, 2017 at 7:17 AM, Gregory Szorc  wrote:

> It may be surprising, but hg.mozilla.org is still accepting plain text
> connections via http://hg.mozilla.org/ and isn't redirecting them to
> https://hg.mozilla.org/.
>
> On February 1 likely around 0800 PST, all requests to
> http://hg.mozilla.org/ will issue an HTTP 301 Moved Permanently redirect
> to https://hg.mozilla.org/.
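
I.e., every plain-text request will get an answer along the lines of:

  HTTP/1.1 301 Moved Permanently
  Location: https://hg.mozilla.org/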
>
> If anything breaks as a result of this change, the general opinion is it
> deserves to break because it isn't using secure communications and is
> possibly a security vulnerability. Therefore, unless this change causes
> widespread carnage, it is unlikely to be rolled back.
>
> Please note that a lot of 3rd parties query random content on
> hg.mozilla.org. For example, Curl's widespread mk-ca-bundle.pl script for
> bootstrapping the trusted CA bundle queried http://hg.mozilla.org/ until
> recently [1]. So it is likely this change may break random things outside
> of Mozilla. Again, anything not using https://hg.mozilla.org/ should
> probably be treated as a security vulnerability and fixed ASAP.
>
> For legacy clients only supporting TLS 1.0 (this includes Python 2.6 and
> /usr/bin/python on all versions of OS X - see [2]), hg.mozilla.org still
> supports [marginally secure compared to TLS 1.1+] TLS 1.0 connections and
> will continue to do so for the foreseeable future.
>
> This change is tracked in bug 450645. Please subscribe to stay in the loop
> regarding future changes, such as removing support for TLS 1.0 and not
> accepting plain text http://hg.mozilla.org/ connections at all.
>
> Please send comments to bug 450645 or reply to dev-version-control@lists.
> mozilla.org.
>
> [1] https://github.com/curl/curl/commit/1ad2bdcf110266c33eea70b895cb8c150eeac790
> [2] https://github.com/Homebrew/homebrew-core/issues/3541
>
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about MOZ_MUST_USE and [must_use]

2017-01-19 Thread Eric Rescorla
What would actually be very helpful would be a way to selectively turn on
checking of
*all* return values in a given file/subdirectory. Is there some
mechanism/plan for that?

Thanks,
-Ekr


On Thu, Jan 19, 2017 at 2:09 PM, Nicholas Nethercote  wrote:

> Hi,
>
> We have two annotations that can be used to prevent missing return value
> checks:
>
> - MOZ_MUST_USE for C++ functions where the return type indicates
> success/failure, e.g. nsresult, bool (in some instances), and some other
> types.
>
> - [must_use] for IDL methods and properties where the nsresult value should
> be checked.
>
> We have *many* functions/methods/properties for which these annotations are
> appropriate, and *many* missing return value checks. Unfortunately, trying
> to fix these proactively is a frustrating and thankless task, because it's
> difficult to know in advance which missing checks are likely to cause
> problems in advance, and adding missing checks is not always
> straightforward.
>
> However, if you do see clearly buggy behaviour (e.g. a crash) caused by a
> missing return value, please take the opportunity to retroactively add the
> annotation(s) in that case!
> https://bugzilla.mozilla.org/show_bug.cgi?id=1331619 is a good example of
> such a bug, and https://bugzilla.mozilla.org/show_bug.cgi?id=1332453 is
> the
> follow-up to add the annotations.
>
> Nick
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Implement and Ship: Large-Allocation Header

2017-01-18 Thread Eric Rescorla
On Wed, Jan 18, 2017 at 12:21 PM, Michael Layzell 
wrote:

> Summary:
> Games implemented on the web platform using WASM or asm.js use large
> contiguous blocks of allocated memory as their backing store for game
> memory. For complex games, these allocations can be quite large, sometimes
> as large as 1GB. In 64-bit builds, we have no problem finding a large
> enough section of virtual memory that we can allocate a large contiguous
> hunk. Unfortunately normal use of Firefox quickly fragments the smaller
> address space of 32-bit Firefox, meaning that these complex games often
> fail to run.
>
> The Large-Allocation header is a hint header for a document. It tells the
> browser that the web content in the to-be-loaded page is going to want to
> perform a large contiguous memory allocation. In our current implementation
> of this feature, we handle this hint by starting a dedicated process for
> the to-be-loaded document, and loading the document inside of that process
> instead. Further top-level navigations in that "Fresh Process" will cause
> the process to be discarded, and the browsing context will shift back to
> the primary content process.
>
> We hope to ship this header alongside WASM in Firefox 52 to enable game
> developers to develop more intense games on our web platform. More details
> on the implementation and limitations of this header and our implementation
> can be found in the linked bug.
>
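For concreteness, as I read the bug, the hint is just a response header
whose value is the expected allocation size in megabytes (0 when the size
isn't known up front), along the lines of:

  HTTP/1.1 200 OK
  Content-Type: text/html
  Large-Allocation: 500
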
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1331083
>
> Link to standard: This feature is non-standard
>

Hmm... Do you intend to publish this with IANA using the procedures
specified in:
https://tools.ietf.org/html/rfc3864


> DevTools bug: none
>
> Do other browser engines implement this?
> No, we are in conversations with Chrome about them potentially also
> recognizing this hint.
>
> Tests: Added as part of the implementation
>
> Security & Privacy Concerns: none
>

I share MT's concerns here. It seems like minimally there are DoS concerns
here, but also potentially concerns about being able to force other
Large-Allocation requesters onto the same process (to the extent that
being on separate processes is valuable).

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Verifiable Claims Working Group

2016-12-28 Thread Eric Rescorla
On Wed, Dec 28, 2016 at 3:52 PM, L. David Baron  wrote:

> Here's an attempt to write up comments to submit on this charter.
> I'm not sure I understood ekr's reply to mt, though.  So corrections
> and clarifications are certainly welcome.
>
> Sorry for the delay circling back to this.
>
> -David
>
> We don't think the W3C should be putting resources behind
> standardization of verifiable claims.  We're not convinced of either
> sufficient demand for this or sufficient incubation of the technology.
>
> However, based on the proposed architecture at
> https://w3c.github.io/webpayments-ig/VCTF/architecture/ ,
> linked from the charter, we're very concerned about the privacy
> properties of this work if the W3C were to proceed with it.
>
> This architecture appears to propose a system in which verification of
> claims leaks substantial information about a user.  For example,
> presenting a credential that is tied to an identity of a user allows for
> tracking of that identity across sites, which the user may not want.  Or
> if, for example, a site accepts claims from various government
> authorities for proof of a user's age, then presentation of a claim of
> age from the California DMV would provide the data that the user lives
> in California, even if that was not the information requested or needed.
>

This seems correct.

I would add:
Even if claims are not directly tied to identity, it appears that the
proposed architecture would allow the Issuer and the Inspector to collude
to determine which Holder a claim applies to.


> There has been substantial work on using cryptography to allow proof of
> specific claims without leaking information, such as
> https://www.microsoft.com/en-us/research/project/u-prove/ .  However,
> this effort seems to ignore that work and instead propose a design with
> much worse privacy properties.
>
> If the W3C were to pursue this work, we think it would be best to pursue
> a system with strong privacy properties such as this one.  However, if
> that is not done, we would be particularly opposed to a system that ties
> claims to a single identity for the user, which would be most prone to
> unsanctioned tracking.  However, even transitory and pseudonomous
> identifiers can leak substantial information, contrary to the
> expectations of the user (in the proposed architecture, the Holder),
> particularly if some or all of the Issuer, Identifier Registry, and
> Inspector cooperate to track the Holder.
>

Yes, this seems good.

-Ekr


>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: NetworkInformation

2016-12-23 Thread Eric Rescorla
On Thu, Dec 22, 2016 at 10:31 PM, <mcace...@mozilla.com> wrote:

> On Wednesday, December 21, 2016 at 12:51:10 AM UTC+11, Eric Rescorla wrote:
> > I'm not really following this argument. Usually when a document has been
> > floating
> > around a long time but clearly has basic design issues and can't get
> > consensus,
> > even when a major vendor has implemented it, that's a sign that it
> > *shouldn't*
> > be standardized until those issues are resolved. That's not standards
> > fatigue,
> > it's the process working as designed.
>
> The API addresses the use cases, but people here see those use cases as
> too basic because they don't represent average users (e.g., Boris' somewhat
> esoteric network setup). Most people have wifi at home, which is somewhat
> unmetered - and access to mobile data, which often costs more (but not
> always true).
>
> The API, though ".type", allows the user and app to have a conversation
> about that: "you want me to download stuff over mobile? It might cost ya,
> but if you are ok with it...".
>

I don't really think this addresses my argument, which is not about any of
the technical details, but is rather about whether we should adopt
something that's clearly not very good -- which it seems clear you are
conceding here -- just because it's been floating around a long time and
people are tired.

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2016-12-20 Thread Eric Rescorla
On Tue, Dec 20, 2016 at 10:28 AM, Cody Wohlers 
wrote:

> Absolutely!  Let's Encrypt sounds awesome, super-easy, and the price is
> right.
>
> But I'm thinking of cases like Lavabit where a judge forced the site
> operator to release the private key.  Or the opposite - could a government
> restrict access to a site by forcing the CA to revoke certificates?  I
> guess you could just get another certificate from another CA but what if
> they are all ordered to revoke you - like in some future world government
> or something...
>

Certainly a government could do that, but it's easier to just go after the
DNS.


> This example is extreme but security is not about the norm, it's about the
> fringe cases.  I just wish we could have an encryption scheme that doesn't
> need any third-party authority, before we start punishing those who don't
> use it.  That's all.
>

As long as sites are identified by domain names and want those names to be
tied to real world identities, I don't see anything like that on the
horizon (i.e., I'm not aware of any technology which would let you do it).

-Ekr



> On Tuesday, 20 December 2016 10:47:33 UTC-7, Jim Blandy  wrote:
> > Can't people use Let's Encrypt to obtain a certificate for free without
> the
> > usual CA run-around?
> >
> > https://letsencrypt.org/getting-started/
> >
> > "Let’s Encrypt is a free, automated, and open certificate authority
> brought
> > to you by the non-profit Internet Security Research Group (ISRG)."
> >
> >
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: NetworkInformation

2016-12-20 Thread Eric Rescorla
On Mon, Dec 19, 2016 at 10:58 PM,  wrote:

> On Friday, December 16, 2016 at 8:33:48 AM UTC+11, Tantek Çelik wrote:
> > On Thu, Dec 15, 2016 at 11:51 AM, Boris Zbarsky <> wrote:
> > > On 12/15/16 12:20 PM, Ben Kelly wrote:
> > >>
> > >> Its more information than nothing.
> > >
> > >
> > > I'm not sure it is.  At least when you have nothing you _know_ you have
> > > nothing, so might think about other ways to find out what you want to
> know.
> > > This way you think you know something but you don't.
> >
> > Agreed with Boris. "more information than nothing" is not an absolute
> > value, when that information is deceiving, which as others have
> > pointed out in this thread, is quite likely to occur with non-trivial
> > frequency in common uses of this API (the "if bandwidth>x then slow
> > download" example proves this point).
> >
> > E.g. a high % of the time, (most of the time when I'm not at home or
> > work), I am on a 4G (high bandwidth) mifi (metered cost).
> >
> > This API would report misleading results for me 100% of the time I am
> > on my mifi, and for anyone else on a 4G mifi.
>
> But you know you are on a mifi as a user: you bought the mifi, you paid
> for the mifi's contract, you connected to the mifi. Same with hotel wifi,
> etc. which may be metered.
>
> The point of the API is to allow the end-user and the application to
> negotiate when it's best to perform a download (not to make decisions about
> what is best and what is going to cost money). There is nothing preventing
> an app from asking the user what network type would be best to perform
> synchronization over.
>
> The general assumption that WIFI is cheap and 3G/4G is expensive may
> sometimes be wrong, but it holds for most users.
>
> > Real experience, all (AFAIK) the "sync to cloud automatically" code
> > out there makes this mistake, e.g. iCloud, DropBox etc., so I've had
> > to turn all of it off or just not use it.
>
> Sure, but that goes back to Ehsan's point about perfect information: we
> can't currently get that until we get better WIFI standards or whatever.
> Until then, your mifi will look like WIFI - but that's not the API's fault.
>
> Again, see the use cases document.
>
> > Let's learn from the error of "native" implementations/uses of this
> > kind of API / use thereof and not repeat that mistake on web,
> > certainly not ship an API that misleads web developers into doing the
> > wrong thing.
>
> The use cases document shows that native apps get this right a lot of the
> time.
>
> We are weighting the iCloud/DropBox problem against all the app examples
> given in the document. Right now, sites use a bunch of hacks to figure out
> if you are on a metered connection or not (see BBC example in the document).
>
> > >> Bluetooth networking is also a thing.
> > >
> > >
> > > That's a good point.
> > >
> > >> I think being able to distinguish this stuff provides some value even
> if
> > >> its not perfect for all cases.  And I don't see how it causes any
> harm.
> > >
> > >
> > > I think it causes harm to give people information they have no business
> > > having ("wifi" vs "ethernet") and it does harm to given them
> information
> > > that's likely to be bogus (the last hop speed in the wifi/ethernet
> cases).
> >
> > Precisely. Obvious harms:
> >
> > 1. Privacy compromise without obvious user benefit
> > 2. Causes web apps to waste user bandwidth/financial resources
> >
> > If the response to that is "but doing it right is too hard", then
> > don't do it all.
> >
> > > Maybe the answer is that we should just reconsider the set of types
> that
> > > gets exposed and how they get mapped to connection speeds
> >
> > I'm not sure that would sufficiently address harm 2.
> >
> > As far as I can tell, the only useful bit of information (as bz
> > pointed out) is the
> >
> > Am I on a cell data connection "for sure or maybe not"?
> > a) Where cell data "for sure" -> will *almost certainly cost the user*
> > b) Whereas "or maybe not" -> you have no idea whether it will cost the
> > user or not, do not make any assumptions.
> >
> > Given that the one pseudo-code example provided earlier in this thread
> > makes the mistake of using case (b) to errantly initiate bandwidth/$
> > wasting downloads (which may not even be necessary), I think this API
> > has not been well thought through in terms of actual user benefit, and
> > needs further incubation.
>
> Yeah, that's why it's currently in the WICG.
>
> > Not to mention we shouldn't even be getting to an "Intent to *ship*"
> > on something we expect to standardize that hasn't even been published
> > as a FPWD yet (which only *starts* the count-down clock to IPR
> > commitment).
>
> It was originally part of DAP, so it's actually gone through years of
> publication (first published in mid 2011):
> https://www.w3.org/TR/2011/WD-netinfo-api-20110607/
>
> All the arguments presented here also got raised by the WG, which made it
> go nowhere... so we took it to the WICG for 

Re: Intent to ship: NetworkInformation

2016-12-15 Thread Eric Rescorla
It seems pretty premature at this point in the process to try to hack around
the API not expressing what we wanted to express. If what we want to express
is "is this link free" then let's make the API say that.

-Ekr


On Thu, Dec 15, 2016 at 4:17 PM, Martin Thomson  wrote:

> On Fri, Dec 16, 2016 at 10:28 AM, Boris Zbarsky  wrote:
> > 2)  Figure out a way to map the one bit of information we actually want
> to
> > expose into some sort of values that look like the existing API. Change
> the
> > spec as needed to allow tweaks we want to make here (e.g. to allow having
> > the max speed thing not be defined or always be 0 or always +Infinity, or
> > always NaN, or always 42 or something).
>
> Patrick suggested that we send a fixed downlink (+Infinity is always
> correct by the spec) always and use wifi/cellular.  I assume that we
> need to pick one of those in the case that we can't/don't know, but I
> like that plan.
>
> We could create a new API as well, but I'm not sure what it would *do*.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: NetworkInformation

2016-12-15 Thread Eric Rescorla
On Thu, Dec 15, 2016 at 6:42 AM, Boris Zbarsky  wrote:

> On 12/15/16 3:28 AM, Andrea Marchesini wrote:
>
>> Spec: https://w3c.github.io/netinfo/
>>
>
> Is there any plan to have this turned into an actual spec, complete with
> IPR commitments, testcases, wider review, etc?
>
> Have we done a privacy review of this spec?  Why should a webpage ever
> know whether I'm connected via "ethernet" vs "wifi" (insofar as we have any
> way to determine this at all; I could be connected via "ethernet" to a
> bridge that is connected via "wifi")?
>

I'm also concerned that this spec does not seem to take into account
multipath or multihoming, both of which seem relevant here. Say that I have
a device with both a cellular and WiFi link and I attempt to use both of
them in some fashion (depending on the remote IP address), what should I be
reporting for Network Connection?

-Ekr




> Looking at the use cases document at <...netinfo-usecases/>, it seems like people generally care more about things
> like "bandwidth costs money" and "how much bandwidth do we expect?" than
> about the actual physical transport, no?
>
> -Boris
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Verifiable Claims Working Group

2016-12-12 Thread Eric Rescorla
On Mon, Dec 12, 2016 at 6:37 PM, Martin Thomson <m...@mozilla.com> wrote:

> On Tue, Dec 13, 2016 at 5:56 AM, Eric Rescorla <e...@rtfm.com> wrote:
> > Following up to myself: if the plan is really that people will have a
> > single identity that they then present to everyone and that claims hang
> > off, I think W3C should not standardize that.
>
> A lot hinges on the nature of that identifier, but couldn't it be a
> pseudonymous identifier, even unique to the specific transaction?
>

That's not enough, because what you need is blind signatures over the
claims. Otherwise, the issuer can tell who you are authenticating to. It
seems pretty clear that the inspector gets the identifier (see fig 5 here
https://w3c.github.io/webpayments-ig/VCTF/architecture/) and so I don't see
how this isn't linkable



Building a set of issuers that sites are willing to trust creates a
> whole new set of problems.  Say that I accept claims from the
> California DMV for the purposes of age.  When you produce a document
> signed by the DMV saying that you are 21, I also learn that (with high
> probability) you live in California.
>
> If which entities are trusted, that has another set of consequences.
> What consequences on whether the relying party does or is able to
> advertise which entities it trusts.
>
> All of this stuff matters at the scale of the web.
>

Yes, all this too

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Verifiable Claims Working Group

2016-12-12 Thread Eric Rescorla
Following up to myself: if the plan is really that people will have a
single identity that they then present to everyone and that claims hang
off, I think W3C should not standardize that.

-Ekr


On Mon, Dec 12, 2016 at 8:48 AM, Eric Rescorla <e...@rtfm.com> wrote:

> I took a quick look at this material and it's very hard to tell what the
> actual privacy properties are:
>
> "From a privacy perspective it is important that information that is
> intended to remain private is handled appropriately. Maintaining the trust
> of a verifiable claims ecosystem is important. Verifiable claims technology
> defined by this group should not disclose private details of the
> participants' identity or other sensitive information unless required for
> operational purposes, by legal or jurisdictional rules, or when
> deliberately consented to (e.g. as part of a request for information) by
> the holder of the information. The design of any data model and syntax(es)
> should guard against the unwanted leakage of such data."
>
> But then when I read their architecture, I see:
> "In order for Jane (Holder and Subject) to have information assigned to
> her, she must get an identifier (Subject Identifier)."
>
> Which makes it sound like this is going to leak a huge amount of tracking
> information (effectively being an identity credential with attributes
> attached). There has been a huge amount of work on using crypto to allow
> you to prove specific claims without information leakage (cf.
> https://www.microsoft.com/en-us/research/project/u-prove/), but this
> doesn't seem to reflect any of it, rather opting for a much more naive
> design which is going to have much worse privacy properties. Is that really
> the intent here?
>
> -Ekr
>
>
>
> On Fri, Dec 9, 2016 at 6:17 PM, L. David Baron <dba...@dbaron.org> wrote:
>
>> The W3C is proposing a new charter for:
>>
>>   Verifiable Claims Working Group
>>   https://www.w3.org/2017/vc/charter
>>   https://lists.w3.org/Archives/Public/public-new-work/2016Dec/0003.html
>>
>> Mozilla has the opportunity to send comments or objections through
>> Sunday, January 15, 2017.
>>
>> Please reply to this thread if you think there's something we should
>> say as part of this charter review, or if you think we should
>> support or oppose it.
>>
>> -David
>>
>> --
>> 턞   L. David Baron http://dbaron.org/   턂
>> 턢   Mozilla  https://www.mozilla.org/   턂
>>  Before I built a wall I'd ask to know
>>  What I was walling in or walling out,
>>  And to whom I was like to give offense.
>>- Robert Frost, Mending Wall (1914)
>>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Verifiable Claims Working Group

2016-12-12 Thread Eric Rescorla
I took a quick look at this material and it's very hard to tell what the
actual privacy properties are:

"From a privacy perspective it is important that information that is
intended to remain private is handled appropriately. Maintaining the trust
of a verifiable claims ecosystem is important. Verifiable claims technology
defined by this group should not disclose private details of the
participants' identity or other sensitive information unless required for
operational purposes, by legal or jurisdictional rules, or when
deliberately consented to (e.g. as part of a request for information) by
the holder of the information. The design of any data model and syntax(es)
should guard against the unwanted leakage of such data."

But then when I read their architecture, I see:
"In order for Jane (Holder and Subject) to have information assigned to
her, she must get an identifier (Subject Identifier)."

Which makes it sound like this is going to leak a huge amount of tracking
information (effectively being an identity credential with attributes
attached). There has been a huge amount of work on using crypto to allow
you to prove specific claims without information leakage (cf.
https://www.microsoft.com/en-us/research/project/u-prove/), but this
doesn't seem to reflect any of it, rather opting for a much more naive
design which is going to have much worse privacy properties. Is that really
the intent here?

-Ekr



On Fri, Dec 9, 2016 at 6:17 PM, L. David Baron  wrote:

> The W3C is proposing a new charter for:
>
>   Verifiable Claims Working Group
>   https://www.w3.org/2017/vc/charter
>   https://lists.w3.org/Archives/Public/public-new-work/2016Dec/0003.html
>
> Mozilla has the opportunity to send comments or objections through
> Sunday, January 15, 2017.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
> -David
>
> --
> 턞   L. David Baron http://dbaron.org/   턂
> 턢   Mozilla  https://www.mozilla.org/   턂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Automotive Working Group

2016-11-04 Thread Eric Rescorla
LGTM

On Fri, Nov 4, 2016 at 5:22 PM, L. David Baron  wrote:

> OK, here's a reformulation that takes a somewhat stronger position
> (mainly by checking the other box, and adding the paragraph at the
> end).
>
> -David
>
>
>  [X] opposes this Charter and requests that this group not be
>  created [Formal Objection] (your details below).
>
> We're concerned enough about the security and privacy aspects of
> this charter and the associated work that we believe this effort is
> not currently ready to begin development on the Recommendation
> track.
>
> We have a number of concerns about the security aspects of this
> work.  It's not clear how exposing vehicle information through
> WebSockets will work in a secure way.  Will connections to parts of
> the car be exposed to the Internet?  If not, how will access be
> limited to allowed clients?  How will integration with the DNS-based
> CA system and with the same origin policy be handled?  The proposals
> to use fixed hostnames don't appear workable, since they don't
> establish unique identities for which certificates can be issued.
> Similarly, it's not clear how the V2X server described verifies that
> the connection it receives is from a vehicle with the VIN that the
> client claims to have.  Security is critical, as security
> vulnerabilities in systems within cars have already led to serious
> safety problems; see http://www.autosec.org/publications.html .
>
> It seems that privacy needs to be a core aspect of this working
> group, given the level of private data involved in this space, and
> given deeper consideration from the beginning than a note that the
> working group will secure reviews from the Privacy Interest Group.
>
> It's also not OK to use a new GTLD (as this proposes using wwwivi);
> see https://tools.ietf.org/html/rfc6761 .
>
> These concerns are apparent after only a brief review.  Given that,
> we believe that the best path forward in this area is for the
> community to take some time to consider security and privacy more
> carefully, and come back later with a charter that reflects that
> consideration.
>
> --
> 턞   L. David Baron http://dbaron.org/   턂
> 턢   Mozilla  https://www.mozilla.org/   턂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Automotive Working Group

2016-11-04 Thread Eric Rescorla
On Fri, Nov 4, 2016 at 12:09 AM, L. David Baron <dba...@dbaron.org> wrote:

>
> So, first, it's not clear to me which option to check in the review.
> I think the basis of these comments is somewhere between:
>
>  [X] suggests changes to this Charter, and only supports the
>  proposal if the changes are adopted [Formal Objection] (your
>  details below).
>
> and:
>
>  [ ] opposes this Charter and requests that this group not be
>  created [Formal Objection] (your details below).
>
> I'm inclined to check the first one given that I think our objection
> is not about whether the group should be created at all, but whether
> it's ready to be created right now given the current charter and
> preparation.
>

It's not clear to me that there are any changes one could plausibly adopt
that would make this acceptable to us. The complaints MT and I had are
just the ones of first impression.

-Ekr


We're concerned enough about the security and privacy aspects of
> this charter and the associated work that we believe this effort is
> not currently ready to begin development on the Recommendation
> track.
>
> We have a number of concerns about the security aspects of this
> work.  It's not clear how exposing vehicle information through
> WebSockets will work in a secure way.  Will connections to parts of
> the car be exposed to the Internet?  If not, how will access be
> limited to allowed clients?  How will integration with the DNS-based
> CA system and with the same origin policy be handled?  The proposals
> to use fixed hostnames don't appear workable, since they don't
> establish unique identities for which certificates can be issued.
> Similarly, it's not clear how the V2X server described verifies that
> the connection it receives is from a vehicle with the VIN that the
> client claims to have.  Security is critical, as security
> vulnerabilities in systems within cars have already led to serious
> safety problems; see http://www.autosec.org/publications.html .
>
> It seems that privacy needs to be a core aspect of this working
> group, given the level of private data involved in this space, and
> given deeper consideration from the beginning than a note that the
> working group will secure reviews from the Privacy Interest Group.
>
> It's also not OK to use a new GTLD (as this proposes using wwwivi);
> see https://tools.ietf.org/html/rfc6761 .
>
>
> On Friday 2016-10-21 15:16 -0700, Tantek Çelik wrote:
> > Ekr,
> >
> > This sounds to me like there are sufficient reasons to formally object
> > to this charter, and as Martin points out, a special case of IoT/WoT
> > (with additional concerns!).
> >
> > David,
> >
> > Thus I too think we should formally object, link to our previous
> > formal objection of the WoT charter (since nearly all the same reasons
> > apply), and list the new items that Martin and Ekr provided. I suggest
> > we cc this response to www-archive as well.
> >
> > Thanks,
> >
> > Tantek
> >
> >
> >
> > On Tue, Oct 18, 2016 at 3:10 AM, Eric Rescorla <e...@rtfm.com> wrote:
> > > I share Martin's concerns here...
> > >
> > > There's fairly extensive evidence of security vulnerabilities in
> > > vehicular systems that can lead to serious safety issues (see:
> > > http://www.autosec.org/publications.html), so more than usual
> > > attention needs to be paid to security in this context.
> > >
> > > In fairness, a lot of these are implementation security issues:
> > > i.e., how to properly firewall off any network access from the
> > > CAN bus. You really need to ensure that there's no way
> > > to influence the CAN bus, which probably means some kind of
> > > very strong one-way communications guarantee. At some level
> > > these are out of scope for this group, but it's predictable
> > > that if this technology is built, people will also implement
> > > it in insecure ways, so in that respect it's very much in-scope.
> > >
> > > The communications security story also seems to be not well
> > > fleshed out. The examples shown all seem to have fixed hostnames
> > > (wwwivi/localhost) which don't really seem like the basis for
> > > strong identity. It's not just enough to say, as in S 6, that
> > > the server has to have a certificate; what are the properties of
> > > that certificate? What identity does it have?
> > >
> > > This is particularly concerning:
> > >
> > > At this point, internet based clients and servers do not know the
> > > dynamic IP address that was assigned to a specific vehicle ...

Re: Removing navigator.buildID?

2016-11-01 Thread Eric Rescorla
On Tue, Nov 1, 2016 at 3:53 PM, Martin Thomson  wrote:

> On Tue, Nov 1, 2016 at 11:15 PM, Aryeh Gregor  wrote:
> > Taking a step back: is fingerprinting really a solvable problem in
> > practice?  At this point, are there a significant fraction of users
> > who can't be fingerprinted pretty reliably?  Inevitably, the more
> > features we add to the platform, the more fingerprinting surface we'll
> > expose.  At a certain point, we might be taking away a lot of features
> > for the sake of trying to stop the unstoppable.  (I have no idea if
> > this is actually true now, though.  This is a genuine question.  :) )
>
> https://www.w3.org/2001/tag/doc/unsanctioned-tracking/
>
> The conclusion: it's probably a lost cause, but we still shouldn't be
> building more of these.
>

I'm not sure I agree with this characterization of the link above. Here's
the relevant text:

"Believes that, because combatting fingerprinting is difficult, new Web
specifications should take reasonable measures to avoid adding unneeded
fingerprinting surface area. However, added surface area should not be a
primary factor in determining whether to add a new feature."


The more principled position is that we shouldn't have to rely on UA
> sniffing (which this is) to determine what a browser can and cannot
> do.  That there are bugs is the main reason we have these things.
>

I agree people shouldn't be using UA sniffing for this purpose, but it's
very useful for determining what the heck is going on when there are
inevitably bugs.


Fixing the buildID to the major version (52) plus the channel
> (Nightly) would be the simplest fix without adding lots of extra
> complexity.
>

Maybe. The nightly population is pretty small and inherently has less
privacy, and release tends to be fairly homogeneous. I'd like to see
some real analysis of the privacy impact before removing a useful
diagnostic surface.

-Ekr

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: TLS 1.3 draft

2016-10-28 Thread Eric Rescorla
Yes.

On Sat, Oct 29, 2016 at 4:40 AM,  wrote:

> On Wednesday, October 19, 2016 at 4:49:43 PM UTC-7, Martin Thomson wrote:
> >
> > As of Firefox 52 I intend to turn TLS 1.3 on by default. ...
> >
> > ...
> >
> > ... Since this is a draft version of the spec going into an ESR release,
> > we intend to disable the feature for the ESR.
>
> hm, is the below re-statement, of the above apparently-conflicting
> statements, correct?
>
>   TLS 1.3, draft -16, will be enabled by default in the regular
>   Firefox 52 release.  Firefox 52 will also be available as an Extended
>   Support Release (ESR), which is made available separately. TLS 1.3
>   will be disabled by default in FF 52 ESR.
>
>
> =JeffH
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-25 Thread Eric Rescorla
On Wed, Oct 26, 2016 at 7:17 AM, Daniel Minor <dmi...@mozilla.com> wrote:

>
>
> On Tue, Oct 25, 2016 at 3:30 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> On Wed, Oct 26, 2016 at 6:17 AM, Chris Peterson <cpeter...@mozilla.com>
>> wrote:
>>
>> > On 10/25/2016 11:43 AM, Eric Rescorla wrote:
>> >
>> >> Setting aside the policy question, the location API for mobile devices
>> >> generally
>> >> gives a much more precise estimate of your location than can be
>> obtained
>> >> from the upstream network provider. For instance, consider the case of
>> the
>> >> ISP upstream from Mozilla's office in Mountain view: they can only
>> >> localize
>> >> a user to within 50 meters or so of the office, whereas GPS is
>> accurate to
>> >> a few meters. And of course someone who is upstream from the ISP may
>> just
>> >> have standard geo IP data.
>> >>
>> >
>> > Assuming every MITM and website already has approximate geo IP location,
>> > we could fuzz the navigator.getCurrentPosition() result for HTTP sites.
>> > That would leak no more information than passive geo IP and would not
>> break
>> > HTTP websites using the geolocation API.
>>
>>
>> This turns out to be incredibly hard.
>> https://tools.ietf.org/id/draft-thomson-geopriv-location-
>> obscuring-03.html
>>
>> If you want to do something like this, probably the best way to do it
>> would
>> be
>> to report the GeoIP from some public database based on the apparent
>> current
>> public IP.
>>
>> -Ekr
>>
>>
> Rather than fuzzing we could consider limiting the precision of the
> returned values for HTTP websites to something like a tenth of a degree.
> That would be enough to locate you in the right part of the world without
> giving much away (unless you happen to be very near a pole...).
>

This turns out not to work very well if you are near the grid lines and
moving at all.
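
To make the failure mode concrete, here is a minimal sketch assuming plain
round-to-nearest bucketing at 0.1 degree (the coordinates are made up):

  #include <cmath>
  #include <cstdio>

  // Snap latitude to 0.1-degree buckets (roughly 11 km north-south).
  static double SnapToGrid(double aDegrees) {
    return std::round(aDegrees * 10.0) / 10.0;
  }

  int main() {
    // Two positions about 20 metres apart, straddling the 37.45 grid line.
    std::printf("a -> %.1f, b -> %.1f\n",
                SnapToGrid(37.44991), SnapToGrid(37.45009));
    // Prints "a -> 37.4, b -> 37.5": watching the bucket flip tells an
    // observer the user is within metres of the boundary, which is far
    // finer information than the bucket size was supposed to allow.
  }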

I would strongly encourage anyone thinking of trying to design a new
scheme to first read Martin's document, which covers the space pretty
well.

-Ekr


> Dan
>
>
>>
>> >
>> > ___
>> > dev-platform mailing list
>> > dev-platform@lists.mozilla.org
>> > https://lists.mozilla.org/listinfo/dev-platform
>> >
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-25 Thread Eric Rescorla
On Wed, Oct 26, 2016 at 6:17 AM, Chris Peterson <cpeter...@mozilla.com>
wrote:

> On 10/25/2016 11:43 AM, Eric Rescorla wrote:
>
>> Setting aside the policy question, the location API for mobile devices
>> generally
>> gives a much more precise estimate of your location than can be obtained
>> from the upstream network provider. For instance, consider the case of the
>> ISP upstream from Mozilla's office in Mountain view: they can only
>> localize
>> a user to within 50 meters or so of the office, whereas GPS is accurate to
>> a few meters. And of course someone who is upstream from the ISP may just
>> have standard geo IP data.
>>
>
> Assuming every MITM and website already has approximate geo IP location,
> we could fuzz the navigator.getCurrentPosition() result for HTTP sites.
> That would leak no more information than passive geo IP and would not break
> HTTP websites using the geolocation API.


This turns out to be incredibly hard.
https://tools.ietf.org/id/draft-thomson-geopriv-location-obscuring-03.html

If you want to do something like this, probably the best way to do it would
be
to report the GeoIP from some public database based on the apparent current
public IP.

-Ekr


>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-25 Thread Eric Rescorla
On Wed, Oct 26, 2016 at 5:24 AM, Aryeh Gregor  wrote:
>
> In this specific case, it seems that the usual candidates for MITMing
> (compromised Wi-Fi, malicious ISP) will already know the user's
> approximate location, because they're the ones who set up the Internet
> connection, and Wi-Fi has limited range.  What exactly is the scenario
> we're worried about here?
>

Setting aside the policy question, the location API for mobile devices
generally
gives a much more precise estimate of your location than can be obtained
from the upstream network provider. For instance, consider the case of the
ISP upstream from Mozilla's office in Mountain View: they can only localize
a user to within 50 meters or so of the office, whereas GPS is accurate to
a few meters. And of course someone who is upstream from the ISP may just
have standard geo IP data.

-Ekr

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Windows XP and Vista Long Term Support Plan

2016-10-24 Thread Eric Rescorla
This seems to assume facts not in evidence, namely that people will stop
using those machines rather than just living with whatever version we
last updated them to.

Do we have any data that shows that that's true?

-Ekr


On Mon, Oct 24, 2016 at 1:12 AM, Gervase Markham  wrote:

> On 22/10/16 10:16, keithgallis...@gmail.com wrote:
> > My concern is that by killing digital certificate updates and TLS
> > updates, still in use machines whose main purpose is Internet access
> > are essentially bricked.
>
> This is a feature, not a bug. If those machines shouldn't be on the
> Internet, and things happen which keep them off the Internet, then hooray.
>
> Gerv
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Automotive Working Group

2016-10-18 Thread Eric Rescorla
I share Martin's concerns here...

There's fairly extensive evidence of security vulnerabilities in
vehicular systems that can lead to serious safety issues (see:
http://www.autosec.org/publications.html), so more than usual
attention needs to be paid to security in this context.

In fairness, a lot of these are implementation security issues:
i.e., how to properly firewall off any network access from the
CAN bus. You really need to ensure that there's no way
to influence the CAN bus, which probably means some kind of
very strong one-way communications guarantee. At some level
these are out of scope for this group, but it's predictable
that if this technology is built, people will also implement
it in insecure ways, so in that respect it's very much in-scope.

The communications security story also seems to be not well
fleshed out. The examples shown all seem to have fixed hostnames
(wwwivi/localhost) which don't really seem like the basis for
strong identity. It's not just enough to say, as in S 6, that
the server has to have a certificate; what are the properties of
that certificate? What identity does it have?

This is particularly concerning:

At this point, internet based clients and servers do not know the
dynamic IP address that was assigned to a specific vehicle. So
normally, a vehicle has to connect to a well known endpoint,
generally using a URL to connect to a V2X Server. The vehicle and
the internet server typically mutually authenticate and the
vehicle 'registers' with the server over an encrypted channel
passing it a unique identifier e.g. its Vehicle Identification
Number (VIN). From that point on, the server has the IP address
that is currently assigned to a vehicle with a particular VIN, and
can share this information with other internet based clients and
servers, which are then able to send messages to the vehicle.

How does the V2X server know that this is actually my VIN? Just because
I claim it over an encrypted channel?

In IETF we often ask at the WG-forming stage whether we feel that the
community has the expertise to take on this work. The current proposal
seems to call that into question and absent some evidence that that
expertise does in fact exist, I believe we should oppose formation.

-Ekr


On Mon, Oct 17, 2016 at 5:11 PM, Martin Thomson  wrote:

> This seems to be a more specific instance of WoT.  As such, the goals
> are much clearer here.  While some of the concerns with the WoT
> charter apply (security in particular!), here are a few additional
> observations:
>
> Exposing the level of information that they claim to want to expose
> needs more privacy treatment than just a casual mention of the PIG.
>
> Websockets?  Protocol?  Both of these are red flags.  Protocol
> development is an entirely different game to APIs and the choice of
> websockets makes me question the judgment of the people involved.  Of
> particular concern is how the group intends to manage interactions
> with SOP.  Do they intend to allow the web at large to connect to
> parts of the car?  The security architecture is worrying in its lack
> of detail.
>
> If this proceeds, the naming choice (wwwivi) will have to change.  It
> is not OK to register a new GTLD (see
> https://tools.ietf.org/html/rfc6761).  A similar mistake was made
> recently in the IETF, and it was ugly. For people interested in the
> gory details, I can provide more details offline.
>
> On Tue, Oct 18, 2016 at 6:32 AM, L. David Baron  wrote:
> > The W3C is proposing a new charter for:
> >
> >   Automotive Working Group
> >   https://lists.w3.org/Archives/Public/public-new-work/2016Oct/0003.html
> >   https://www.w3.org/2014/automotive/charter-2016.html
> >
> > Mozilla has the opportunity to send comments or objections through
> > Monday, November 7.  However, I hope to be able to complete the
> > comments by Tuesday, October 25.
> >
> > Please reply to this thread if you think there's something we should
> > say as part of this charter review, or if you think we should
> > support or oppose it.
> >
> > Note that this is a new working group.  I don't know of anyone from
> > Mozilla being involved in the discussions that led to this charter.
> >
> > -David
> >
> > --
> > 턞   L. David Baron http://dbaron.org/   턂
> > 턢   Mozilla  https://www.mozilla.org/   턂
> >  Before I built a wall I'd ask to know
> >  What I was walling in or walling out,
> >  And to whom I was like to give offense.
> >- Robert Frost, Mending Wall (1914)
> >

Re: W3C Proposed Recommendation: WebIDL Level 1

2016-10-11 Thread Eric Rescorla
Speaking as someone who is at best a consumer of WebIDL, what's the
argument for doing a snapshot at this time?

-Ekr


On Mon, Oct 10, 2016 at 11:24 PM, Anne van Kesteren 
wrote:

> On Tuesday, 11 October 2016, L. David Baron  wrote:
>
> > But given that it is worthwhile to advance snapshots of stable
> > features to Recommendation every so often, is there a reason to
> > oppose this particular snapshot, even though it's not a suitable
> > target for implementation?
>
>
> It's not worthwhile objecting I think, I just wanted to share what is going
> on. I do suspect this will result in a fair amount of time wasting down the
> line due to groups refusing to use better features/syntax because they are
> not "stable" (and W3C Team encouraging such behavior). Nothing new with
> that though.
>
>
> --
> https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: mozilla/unused.h renamed to mozilla/Unused.h

2016-08-27 Thread Eric Rescorla
On Sat, Aug 27, 2016 at 8:53 AM, Gregory Szorc  wrote:

>
>
> > On Aug 27, 2016, at 07:09, Kan-Ru Chen  wrote:
> >
> >> On Sat, Aug 27, 2016, at 11:35 AM, Gregory Szorc wrote:
> >>> On Fri, Aug 26, 2016 at 8:27 PM, Steve Fink  wrote:
> >>>
>  On 08/26/2016 08:16 PM, Gregory Szorc wrote:
> 
> 
> > On Aug 26, 2016, at 19:54, Kan-Ru Chen  wrote:
> >
> > Hello,
> >
> > In Bug 1297276 I landed a patch to rename mozilla/unused.h to
> > mozilla/Unused.h to make it more consistent with our other MFBT
> headers.
> > Normally rename a header shouldn't cause too much trouble, however
> this
> > rename is only changing the case so you might experience some
> problems
> > on case insensitive filesystem.
> >
> > As pointed out by Tim in
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1297276#c19 if you use
> > |git pull -f| to update local copy of gecko and git refuse to, you
> can
> > rm mfbt/unused.* first to make git happy.
>  Case only renames cause a lot of havoc. Somewhere there is an open
> bug to
>  implement a server side hook to reject them.
> 
>  What I'm trying to say is thank you for reminding me to implement the
>  hook. And congratulations on likely being the last person to perform
> a case
>  only rename on the repo.
> >>>
> >>> For the record, is there a better way to accomplish this? In this
> >>> particular case, it seems like we really do want the rename. Would it
> work
> >>> better to do two commits, one from mozilla/unused.h ->
> >>> mozilla/LucyTheDancingFerret.h, then another doing
> >>> mozilla/LucyTheDancingFerret.h -> mozilla/Unused.h?
> >>
> >>
> >> No. Because if you perform an update/checkout across the rename, you may
> >> encounter problems. This can happen when bisecting, for example.
> >>
> >> Now, this isn't as bad as a case folding collision where you have both
> >> unused.h and Unused.h: some filesystems just flat out refuse that.
> >>
> >> At least with a rename your VCS has the opportunity to delete the old
> >> file
> >> before creating the different cased one. But even then, build systems
> and
> >> other tools can be confused because some filesystems are case
> insensitive
> >> but case preserving and tools may make inappropriate decisions about
> >> whether "unused.h" and "Unused.h" are the same file. There's a lot that
> >> can
> >> go wrong. So the whole scenario is best avoided.
> >
> > So given the fact people can make mistakes, what would you suggest if
> > one really wants to fix the filename? I hope the hook will display such
> > suggestions instead of just say "don't do this."
>
> The hook will likely link to a web page explaining in more detail what to
> do.
>
> While there is always an option to override hooks, sadly in this case I
> don't think we'll allow it too often. This also means - rather
> unfortunately - that if someone makes a casing mistake it will forever be
> enshrined in the repo.


I suppose we can deal with these issues as they come up, but I don't think
it's at all obvious that the build/version control issues required to make
the transition work outweigh the engineering ergonomics benefits of having
consistency in the code, so consider this a note that there's not really
consensus that leaving these mistakes enshrined forever is good policy.

-Ekr





> The best way to avoid this is linting tests and code review. With the
> autoland repo, we'll soon drop bad commits instead of backing them out. So
> if something in automation finds case problems, the commit effectively
> never existed and there won't be a problem (beyond automation).
>
> One could whip up a filename auditing TaskCluster task pretty easily.
> Essentially copy https://hg.mozilla.org/mozilla-central/rev/f5e13a9a2e36.
> I'll happily review it.
>
> Also, I'm sorry you had to fall into this mostly unmarked trap. Others
> have likely done similar things unknowingly. You did the right thing and
> sent an email to notify people. That also happened to remind me that we
> need to fix things :) Please don't feel bad about what you did: it wasn't
> your fault. Blame the lack of tools that didn't catch this. And blame me
> for having a bug (assigned to me even!) to implement one of those tools.
>
> >
> >> The bug tracking the server hook is 797962 btw.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New [must_use] property in XPIDL

2016-08-23 Thread Eric Rescorla
Fair enough. I wouldn't be against introducing a separate unused marker for
this purpose.

-Ekr


On Tue, Aug 23, 2016 at 6:40 AM, Benjamin Smedberg <benja...@smedbergs.us>
wrote:

> cast-to-void is commonly suggested as an alternative to an explicit unused
> marking, and it is something that I wanted to use originally.
> Unfortunately, we have not been able to make that work: this is primarily
> because compilers often remove the cast-to-void as part of the parsing
> phase, so it's not visible in the parse tree for static checkers.
>
> --BDS
>
> On Tue, Aug 23, 2016 at 9:19 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> I'm indifferent to this particular case, but I think that rkent's point
>> about static
>> checking is a good one. Given that C++ has existing annotations that say:
>>
>> - This does not produce a useful return value (return void)
>> - I am explicitly ignoring the return value (cast to void)
>>
>> And that we have (albeit imperfect) static checking tools that can detect
>> non-use of return values, it seems like we would ultimately be better
>> off using those tools to treat "use the return value" as the default,
>> flagging a compiler error whenever that doesn't happen, rather than
>> introducing yet a third annotation to opt individual methods into that
>> behavior. I appreciate that we have a lot of code that violates this
>> rule now, so actually cleaning that up is a long process of gradually
>> converting the code base, but it has the advantage that once that's
>> done it just stays clean (just like any other -Werror conversion).
>>
>> -Ekr
>>
>>
>> On Mon, Aug 22, 2016 at 5:03 PM, Bobby Holley <bobbyhol...@gmail.com>
>> wrote:
>>
>> > On Mon, Aug 22, 2016 at 4:39 PM, R Kent James <k...@caspia.com> wrote:
>> >
>> > > On 8/21/2016 9:14 PM, Nicholas Nethercote wrote:
>> > > > I strongly encourage people to do likewise on
>> > > > any IDL files with which they are familiar. Adding appropriate
>> checks
>> > > isn't
>> > > > always easy
>> > >
>> > > Exactly, and I hope that you and others restrain your exuberance a
>> > > little bit for this reason. A warning would be one thing, but a hard
>> > > failure that forces developers to drop what they are doing and think
>> > > hard about an appropriate check is just having you set YOUR priorities
>> > > for people rather than letting people do what might be much more
>> > > important work.
>> > >
>> > > There's lots of legacy code around that may or may not be worth the
>> > > effort to think hard about such failures. This is really better suited
>> > > for a static checker that can make a list of problems, which list can
>> be
>> > > managed appropriately for priority, rather than a hard error that
>> forces
>> > > us to drop everything.
>> > >
>> >
>> > I don't quite follow the objection here.
>> >
>> > Anybody who adds such an annotation needs to get the tree green before
>> they
>> > land the annotation. Developers writing new code that ignores the
>> nsresult
>> > will get instant feedback (by way of try failure) that the developer of
>> the
>> > API thinks the nsresult needs to be checked. This doesn't seem like an
>> > undue burden, and enforced-by-default assertions are critical to code
>> > hygiene and quality.
>> >
>> > If your concern is the way this API change may break Thunderbird-only
>> > consumers of shared XPCOM APIs, that's related to Thunderbird being a
>> > non-Tier-1 platform, and pretty orthogonal to the specific change that
>> Nick
>> > made.
>> >
>> > bholley
>> >
>> >
>> >
>> > > :rkent
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New [must_use] property in XPIDL

2016-08-23 Thread Eric Rescorla
I'm indifferent to this particular case, but I think that rkent's point
about static
checking is a good one. Given that C++ has existing annotations that say:

- This does not produce a useful return value (return void)
- I am explicitly ignoring the return value (cast to void)

And that we have (albeit imperfect) static checking tools that can detect
non-use of return values, it seems like we would ultimately be better off
using those tools to treat "use the return value" as the default, flagging
a compiler error whenever that doesn't happen, rather than introducing yet
a third annotation to opt individual methods into that behavior.
I appreciate that we have a lot of code that violates this rule now, so
actually cleaning that up is a long process of gradually converting the
code base, but it has the advantage that once that's done it just stays
clean (just like any other -Werror conversion).
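
To make the proposed default concrete, here is a minimal sketch using
standard C++17 [[nodiscard]] as a stand-in for the actual XPIDL/MFBT
annotations (nsresult here is just an illustrative alias):

  #include <cstdio>

  using nsresult = unsigned;      // illustrative stand-in for the real type
  constexpr nsresult NS_OK = 0;

  [[nodiscard]] nsresult DoSomething() { return NS_OK; }

  int main() {
    // DoSomething();             // diagnosed: result ignored (error under -Werror)
    if (DoSomething() != NS_OK) { // the checked path that should be the default
      std::fprintf(stderr, "DoSomething failed\n");
    }
    (void)DoSomething();          // the explicit "ignoring on purpose" marker;
                                  // some toolchains have been reported to drop
                                  // this cast during parsing, which hides it
                                  // from AST-based static checkers
    return 0;
  }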

-Ekr


On Mon, Aug 22, 2016 at 5:03 PM, Bobby Holley  wrote:

> On Mon, Aug 22, 2016 at 4:39 PM, R Kent James  wrote:
>
> > On 8/21/2016 9:14 PM, Nicholas Nethercote wrote:
> > > I strongly encourage people to do likewise on
> > > any IDL files with which they are familiar. Adding appropriate checks
> > isn't
> > > always easy
> >
> > Exactly, and I hope that you and others restrain your exuberance a
> > little bit for this reason. A warning would be one thing, but a hard
> > failure that forces developers to drop what they are doing and think
> > hard about an appropriate check is just having you set YOUR priorities
> > for people rather than letting people do what might be much more
> > important work.
> >
> > There's lots of legacy code around that may or may not be worth the
> > effort to think hard about such failures. This is really better suited
> > for a static checker that can make a list of problems, which list can be
> > managed appropriately for priority, rather than a hard error that forces
> > us to drop everything.
> >
>
> I don't quite follow the objection here.
>
> Anybody who adds such an annotation needs to get the tree green before they
> land the annotation. Developers writing new code that ignores the nsresult
> will get instant feedback (by way of try failure) that the developer of the
> API thinks the nsresult needs to be checked. This doesn't seem like an
> undue burden, and enforced-by-default assertions are critical to code
> hygiene and quality.
>
> If your concern is the way this API change may break Thunderbird-only
> consumers of shared XPCOM APIs, that's related to Thunderbird being a
> non-Tier-1 platform, and pretty orthogonal to the specific change that Nick
> made.
>
> bholley
>
>
>
> > :rkent
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: WebRTC connections do not trigger content policies. Should they?

2016-06-21 Thread Eric Rescorla
On Tue, Jun 21, 2016 at 12:30 PM, Daniel Veditz <dved...@mozilla.com> wrote:

> On Sat, Jun 18, 2016 at 6:37 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
> > instead of having it sourced from the advertiser's origin, they instead
> > stand up "<something>.publisher.example.com" and point it at the
> > advertiser's IP addresses (via an A record, so the advertiser's name
> > never appears). This would have a similar cost/effort structure to
> > using data channels and would similarly not be blocked by current
> > domain-based ad blockers.
> >
>
> ​Most ad-blockers (ABP, certainly) could easily block *.
> publisher.example.com and whitelist the handful of
> foo.publisher.example.com
> and bar.publisher.example.com, etc. needed to make the site "work".
>

That seems like a very expensive manual process.

-Ekr


> wrt nsIContentPolicy, though, I'm not sure where a PeerConnection fits in.
>
> -Dan Veditz
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: WebRTC connections do not trigger content policies. Should they?

2016-06-18 Thread Eric Rescorla
On Sat, Jun 18, 2016 at 4:55 PM, Anne van Kesteren <ann...@annevk.nl> wrote:

> On Sat, Jun 18, 2016 at 2:37 PM, Eric Rescorla <e...@rtfm.com> wrote:
> > The priority of this proposed feature seems to depend rather a lot on
> > whether enough
> > advertisers are using WebRTC to deliver ads to make it worth some ad
> > blocker being
> > interest in adding such a blocker. Do we have any evidence on this front?
>
> Isn't the problem more that if you use CSP to block outgoing
> connections, WebRTC can be used for exfiltration during XSS?


That wasn't the concern I understood Paul to be raising:
"for example, ad blockers use content policy to block ads".

With that said, this does seem like a potential problem, though perhaps a
more
tractable one, in that the CSP restrictions are whitelists rather than
blacklists.

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: WebRTC connections do not trigger content policies. Should they?

2016-06-18 Thread Eric Rescorla
The priority of this proposed feature seems to depend rather a lot on
whether enough advertisers are using WebRTC to deliver ads to make it
worth some ad blocker's while to add such blocking. Do we have any
evidence on this front?

It's worth noting that from a security and tracking perspective, ads
delivered via this
mechanism look a lot more like first party ads (they are in the origin of
the main
page and do not get cookies from the advertiser's origin). So, in that
respect all
you're really doing is making it possible for the advertiser to send the
data directly
to the user rather than tunnelling it through the publisher. That's
potentially of some
value, but perhaps not an enormous amount.

As a thought experiment, consider an ad serving design in which publishers
include ad content as they do now but, instead of having it sourced from
the advertiser's origin, they instead stand up
"<something>.publisher.example.com" and point it at the advertiser's IP
addresses (via an A record, so the advertiser's name never appears). This
would have a similar cost/effort structure to using data channels and
would similarly not be blocked by current domain-based ad blockers. What
would our expectations be here?

-Ekr


On Sat, Jun 18, 2016 at 10:50 AM, Paul Ellenbogen 
wrote:

> On Fri, Jun 17, 2016 at 6:43 PM, Jan-Ivar Bruaroey 
> wrote:
>
> > Data channels are modeled on web sockets, and I see we do this for web
> > sockets. https://bugzil.la/692067
> >
> > However, data channels are typically opened to other *clients*, not
> > servers.
> >
>
> While WebRTC is typically used to connect between clients, this is by no
> means necessary. My understanding is that anyone can set up a server that
> accepts WebRTC data channel connections.
>
>
> >
> > What would the ContentLocation URI be in this case? The (dynamic) IP used
> > to reach the other client?
> >
>
> I think it would be IP addresses in most cases, unless ICE candidates can
> be URLs too.
>
> >
> > This seems easily circumvented by routing data through another client
> that
> > doesn't use content policy.
>
>
> In the advertising example, this means an advertiser would have to push
> this new IP to ALL publishers on their platform. In the example I proposed,
> the advantage to publishers is that they only need to paste in a snippet of
> javascript with hard coded ICE candidates. This means they don't require
> anything sophisticated on the backend to serve ads.
>
> Using dynamic IPs means that publishers would need to A) regularly paste in
> a new version of the advertising code or B) set up a backend to fetch those
> IPs regularly and update the hard coded values. Either of those are much
> more complicated for the publisher when compared to just pasting in
> javascript.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Return-value-optimization when return type is RefPtr

2016-06-16 Thread Eric Rescorla
On Thu, Jun 16, 2016 at 2:15 PM, Michael Layzell <mich...@thelayzells.com>
wrote:

> We pass T* as an argument to a function too often for this to be practical.
>

Can you explain why? Is your point here to avoid people having to type
.get() or the mass
conversion from the current code? The former seems like it's a feature
rather than a bug...

-Ekr

> The best solution is probably to remove the conversion from RefPtr&& to
> T*, which is, I believe, what froydnj is planning to do.
>
> This requires ref qualifiers for methods, which isn't supported in MSVC
> 2013, but is supported in 2015. See
> https://bugzilla.mozilla.org/show_bug.cgi?id=1280295
>
> On Thu, Jun 16, 2016 at 12:27 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> Is it worth reconsidering removing implicit conversion from RefPtr to
>> T*?
>>
>> -Ekr
>>
>>
>> On Thu, Jun 16, 2016 at 9:50 AM, Boris Zbarsky <bzbar...@mit.edu> wrote:
>>
>> > On 6/16/16 3:15 AM, jww...@mozilla.com wrote:
>> >
>> >> I think that is the legacy when we don't have move semantics. Returning
>> >> RefPtr won't incur ref-counting overhead and is more expressive and
>> >> functional.
>> >>
>> >
>> > Except for the footgun described in <
>> > https://bugzilla.mozilla.org/show_bug.cgi?id=1280296#c0>, yes?
>> >
>> > -Boris
>> >
>> >
>> > ___
>> > dev-platform mailing list
>> > dev-platform@lists.mozilla.org
>> > https://lists.mozilla.org/listinfo/dev-platform
>> >
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Return-value-optimization when return type is RefPtr

2016-06-16 Thread Eric Rescorla
Is it worth reconsidering removing implicit conversion from RefPtr to T*?

-Ekr


On Thu, Jun 16, 2016 at 9:50 AM, Boris Zbarsky  wrote:

> On 6/16/16 3:15 AM, jww...@mozilla.com wrote:
>
>> I think that is the legacy when we don't have move semantics. Returning
>> RefPtr won't incur ref-counting overhead and is more expressive and
>> functional.
>>
>
> Except for the footgun described in <
> https://bugzilla.mozilla.org/show_bug.cgi?id=1280296#c0>, yes?
>
> -Boris
>
>


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-05-31 Thread Eric Rescorla
Also, perhaps function name (__func__) or one of the pretty versions.

-Ekr
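Putting the suggestions together, a hedged sketch (illustrative only; not
the actual MOZ_CRASH definition in mfbt/Assertions.h):

  #include <stdio.h>
  #include <stdlib.h>

  // Hypothetical macro: the preprocessor captures file, line, and function
  // automatically, so even message-less call sites can be collated.
  #define MY_CRASH(reason)                                   \
    do {                                                     \
      fprintf(stderr, "Hit MY_CRASH(%s) at %s:%d in %s\n",   \
              (reason), __FILE__, __LINE__, __func__);       \
      abort();                                               \
    } while (0)

__FILE__ and __LINE__ are standard C/C++; __func__ (C99/C++11) names the
enclosing function, per the suggestion above.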


On Tue, May 31, 2016 at 1:20 PM, Jeff Gilbert  wrote:

> On Tue, May 31, 2016 at 6:18 AM, Gabriele Svelto 
> wrote:
> > On 31/05/2016 13:26, Gijs Kruitbosch wrote:
> >> We could do a find/replace of no-arg calls to a new macro that uses
> >> MOZ_CRASH with a boilerplate message, and make the argument non-optional
> >> for new uses of MOZ_CRASH? That would avoid the problem for new
> >> MOZ_CRASH() additions, which seems like it would be wise so the problem
> >> doesn't get worse? Or is it not worth even that?
> >
> > What about adding file/line number information? This way one could
> > always tell where it's coming from even if it doesn't have a descriptive
> > string.
>
> Agreed! These queries are much more useful if they have file names.
> Line numbers are a plus, but I agree that since these drift, they are
> not useful for collating. File names will generally not drift, and
> would make these queries much easier to grep for problems originating
> from code we're responsible for.


Re: All about crashes

2016-05-25 Thread Eric Rescorla
On Wed, May 25, 2016 at 8:53 AM, Steve Fink <sf...@mozilla.com> wrote:

> On 05/25/2016 06:09 AM, Eric Rescorla wrote:
>
>> Under "Ways to prevent" you suggest
>> "Ways to prevent (by making them impossible)" and rewriting in JS or Rust,
>> using smart pointers, etc.
>>
>> This may prevent crashes in the narrow sense that it prevents SEGVs, etc.
>> but it does not make runtime errors that lead to program shutdown
>> impossible. To take an example, even if a C++ program only uses smart
>> pointers, it is still possible to have null pointer dereferences, which
>> then cause program shutdown. Unrecoverable runtime errors are also
>> possible in JS/Rust. I don't disagree that safer languages
>> eliminate a large class of crashes, but they don't make them impossible.
>>
>>
> I did not read that as making *all* defects impossible, rather that it was
> talking about preventing defects, and one such approach is to use a
> mechanism (Rust, smart pointers) that makes certain types of defects
> impossible.


It's not a matter of defects versus non-defects. It's a matter of abnormal
program termination versus non-termination.

-Ekr


> It is accurate to say that you are preventing some defects. (And I imagine
> that even with Rust, you are making other types of defects possible or just
> more likely. But on balance, rewriting C++ code in Rust is a valid way to
> prevent certain defects.)
>
>


Re: All about crashes

2016-05-25 Thread Eric Rescorla
Under "Ways to prevent" you suggest
"Ways to prevent (by making them impossible)" and rewriting in JS or Rust,
using smart pointers, etc.

This may prevent crashes in the narrow sense that it prevents SEGVs, etc.
but it does not make runtime errors that lead to program shutdown
impossible. To take an example, even if a C++ program only uses smart
pointers, it is still possible to have null pointer dereferences, which
then cause program shutdown. Unrecoverable runtime errors are also possible
in JS/Rust. I don't disagree that safer languages
eliminate a large class of crashes, but they don't make them impossible.

-Ekr
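To make the null-pointer case concrete, a two-line sketch (Widget is a
hypothetical type; any smart pointer behaves the same way):

  mozilla::UniquePtr<Widget> w;  // perfectly valid, but empty
  w->Draw();                     // still a null dereference and an abnormal
                                 // termination; the smart pointer prevents
                                 // leaks and double-frees, not this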



On Mon, May 23, 2016 at 9:56 PM, Nicholas Nethercote  wrote:

> Greetings,
>
> I've written a document called "All about crashes" which I've put on
> the Project Uptime wiki:
>
> https://wiki.mozilla.org/Platform/Uptime#All_about_crashes
>
> It's about all the different ways we can discover, diagnose, and
> address crashes. It's intended to be a comprehensive, because I want
> to use it to help identify and prioritise all the ways we could do
> better.
>
> I would appreciate any feedback people might have. I'm sure I have
> gotten some things wrong, and omitted some things. Thanks in advance.
>
> Nick


Re: Intent to (sort of) unship SSLKEYLOGFILE logging

2016-04-26 Thread Eric Rescorla
On the topic of debugging, it's worth noting that TLS 1.3 is going to be
quite a bit harder to debug from just network traces (without keying
material) than TLS 1.2 was because more of the traffic is encrypted.

-Ekr


On Tue, Apr 26, 2016 at 6:30 AM, Patrick McManus 
wrote:

> I don't think the case for making this change (even to release builds) has
> been successfully made yet and the ability to debug and iterate on the
> quality of the application network stack is hurt by it.
>
> The Key Log - in release builds - is part of the debugging strategy and is
> used fairly commonly in the network stack diagnostics. The first line of
> defense is dev tools, the second is NSPR logging, and the third is
> wireshark with a key log because sometimes what is logged is not what is
> really happening on the 'wire' (thus the need to troubleshoot).
>
> Bug reporters are often not developers and sometimes do not have the
> option of running other builds (or the willingness to do so). Removing
> functionality that
> helps with that is damaging to our strategic goal of building our Core and
> emphasizing quality. Bug 1188657 suggests that this functionality is for
> diagnosing tricky TLS bugs, but it's just as helpful for diagnosing
> anything using TLS, which we of course hope will be everything.
>
> But of course if it represents a security hole then it is medicine that
> needs to be swallowed - I wouldn't argue against that. That's why I say the
> case hasn't been made yet.
>
> The mechanism requires machine level control to enable - the same level of
> control that can alter the firefox binary, or annotate the CA root key
> store or any number of other well understood things. Daniel suggests that
> Chrome will keep this functionality. The bug 1183318 handwaves around
> social engineering attacks against this - but of course that's the same
> vector for machine level control of those other attacks as well - I don't
> see anything really improved by making this change, but our usability and
> ability to iterate on quality are damaged. Maybe I'm mis understanding the
> attack this change ameliorates?
>
> Minimally we should be having this discussion about a change in
> functionality for  Firefox 49 - not something that just moved up a
> release-train channel.
>
> Lastly, as a more strategic point I think reducing the tooling around HTTPS
> serves to dis-incentivize HTTPS. Obviously, we don't want to do that.
> Sometimes there are tradeoffs to be made, I'm skeptical of this one though.
>
>
> On Tue, Apr 26, 2016 at 12:44 AM, Martin Thomson  wrote:
>
> > In NSS, we have landed bug 1183318 [1], which I expect will be part of
> > Firefox 48.
> >
> > This disables the use of the SSLKEYLOGFILE environment variable in
> > optimized builds of NSS.  That means all released Firefox channels
> > won't have this feature as it rides the trains.
> >
> > This feature is sometimes used to extract TLS keys for decrypting
> > Wireshark traces [2].  The landing of this bug means that it will no
> > longer be possible to log all your secret keys unless you have a debug
> > build.
> >
> > This is a fairly specialized thing to want to do, and weighing
> > benefits against risks in this case is an exercise in comparing very
> > small numbers, which is hard.  I realize that this is very helpful for
> > a select few people, but we decided to take the safe option in the
> > absence of other information.
> >
> > (I almost forgot to send this, but then [3] reminded me in a very
> > timely fashion.)
> >
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1183318
> > [2]
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
> > [3]
> > https://lists.mozilla.org/pipermail/dev-platform/2016-April/014573.html


Re: Out parameters, References vs. Pointers (was: Proposal: use nsresult& outparams in constructors to represent failure)

2016-04-22 Thread Eric Rescorla
I agree with this.

FWIW, the Google style guide requires that reference params be const.
https://google.github.io/styleguide/cppguide.html#Reference_Arguments

-Ekr
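The call-site argument, as a short hedged sketch (both functions are
hypothetical):

  void ComputeWithPtr(nsresult* aRv);  // pointer outparam
  void ComputeWithRef(nsresult& aRv);  // reference outparam

  nsresult rv1, rv2;
  ComputeWithPtr(&rv1);  // the & makes the mutation visible at the call site
  ComputeWithRef(rv2);   // reads like pass-by-value; the mutation is hidden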


On Thu, Apr 21, 2016 at 9:51 PM, Jeff Gilbert  wrote:

> Pointers are preferable for outparams because they make it clearer
> what's going on at the callsite. (at least indicating that something
> non-trivial is happening)
>
> On Wed, Apr 20, 2016 at 8:07 PM, Kan-Ru Chen (陳侃如) 
> wrote:
> > Nicholas Nethercote  writes:
> >
> >> Hi,
> >>
> >> C++ constructors can't be made fallible without using exceptions. As a
> result,
> >> for many classes we have a constructor and a fallible Init() method
> which must
> >> be called immediately after construction.
> >>
> >> Except... there is one way to make constructors fallible: use an
> |nsresult&
> >> aRv| outparam to communicate possible failure. I propose that we start
> doing
> >> this.
> >
> > Current coding style guidelines suggest that out parameters should use
> > pointers instead of references. The suggested |nsresult&| will be
> > consistent with |ErrorResult&| usage from DOM but against many other out
> > parameters, especially XPCOM code.
> >
> > Should we special case that nsresult and ErrorResult as output
> > parameters should always use references, or make it also the default
> > style for out parameters?
> >
> > I think this topic has been discussed before didn't reach a
> > consensus. Based the recent effort to make the code using somewhat
> > consistent style, should we expend on this on the wiki?
> >
> >Kanru


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-21 Thread Eric Rescorla
On Thu, Apr 21, 2016 at 7:57 AM, Nicholas Nethercote <n.netherc...@gmail.com
> wrote:

> On Thu, Apr 21, 2016 at 3:05 PM, Eric Rescorla <e...@rtfm.com> wrote:
> > The general problem that
> > it doesn't alleviate is that failure to check the return value leaves you
> > with a reference/pointer to an object in an ill-defined half-constructed
> > state. At least for heap allocations, I'd much rather have the property
> that
> > failures leave you with a null pointer.
>
> First of all, with neither approach do you end up with a null pointer,
> at least not in Gecko where we have infallible new. So let's ignore
> that sentence.
>

No, let's not ignore that sentence. I'm aware that the current idioms don't
provide that. I'm saying that if we are going to bother to add a new idiom
it should have this property.


> So, if we are going to do something along these lines, I would want it to
> be
> > a convention that if you use MakeUnique and the like (as you should) then
> > they automatically validate correct construction and if not return an
> empty
> > pointer.
>
> MakeUnique() just allocates and calls the constructor. If you have
> fallible steps in your intialization then MakeUnique() won't help you;
> you'll still have either call your Init() function afterwards or check
> your nsresult& outparam. So that's not relevant.
>

Yes, what I'm saying is that we should modify MakeUnique so that if a
constructor has the form you are recommending, it should end up with an
empty UniquePtr rather than requiring you to explicitly check the result.
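Something along these lines, as a hedged sketch (MakeUniqueFallible is a
hypothetical helper, not an existing MFBT API):

  #include <utility>              // std::forward
  #include "mozilla/UniquePtr.h"  // mozilla::UniquePtr / MakeUnique

  // Forwards to the trailing |nsresult&| constructor convention proposed in
  // this thread and folds the failure check into the factory itself.
  template <typename T, typename... Args>
  mozilla::UniquePtr<T> MakeUniqueFallible(Args&&... aArgs) {
    nsresult rv = NS_OK;
    auto p = mozilla::MakeUnique<T>(std::forward<Args>(aArgs)..., rv);
    if (NS_FAILED(rv)) {
      return nullptr;  // the half-constructed object is destroyed here
    }
    return p;
  }

Callers then get the null-on-failure property for heap objects:
|auto th = MakeUniqueFallible<T>(); if (!th) { return NS_ERROR_FAILURE; }|.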



> Maybe you're referring to factory methods,


I'm not.



> like this:
>
>   static T* T::New();
>
> which would return null on failure. Such methods can be useful, but
> there's two problems. First, they're not applicable to stack-allocated
> objects. Second, you still have to do your fallible initialization
> *within* the factory method, and so you still have to choose with
> either constructor+Init or constructor+outparam, so you still have to
> make a choice.
>

This is why I didn't suggest it.

-Ekr


>
> Nick
>


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-20 Thread Eric Rescorla
I'm sympathetic to the desire to have a single fallible construction
function (this is generally how I structure things in C code), but I'm not
sure that this is really the right design for it. The general problem that
it doesn't alleviate is that failure to check the return value leaves you
with a reference/pointer to an object in an ill-defined half-constructed
state. At least for heap allocations, I'd much rather have the property
that failures leave you with a null pointer.

So, if we are going to do something along these lines, I would want it to
be a convention that if you use MakeUnique and the like (as you should)
then they automatically validate correct construction and if not return an
empty pointer.

-Ekr











On Thu, Apr 21, 2016 at 3:07 AM, Nicholas Nethercote  wrote:

> Hi,
>
> C++ constructors can't be made fallible without using exceptions. As a
> result, for many classes we have a constructor and a fallible Init()
> method which must be called immediately after construction.
>
> Except... there is one way to make constructors fallible: use an
> |nsresult& aRv| outparam to communicate possible failure. I propose that
> we start doing this.
>
> Here's an example showing stack allocation and heap allocation. Currently,
> we do this (boolean return type):
>
>   T ts;
>   if (!ts.Init()) {
>     return NS_ERROR_FAILURE;
>   }
>   T* th = new T();
>   if (!th->Init()) {
>     delete th;
>     return NS_ERROR_FAILURE;
>   }
>
> or this (nsresult return type):
>
>   T ts;
>   nsresult rv = ts.Init();
>   if (NS_FAILED(rv)) {
>     return rv;
>   }
>   T* th = new T();
>   rv = th->Init();
>   if (NS_FAILED(rv)) {
>     delete th;
>     return rv;
>   }
>
> (In all the examples you could use a smart pointer to avoid the explicit
> |delete|. This doesn't affect my argument in any way.)
>
> Instead, we would do this:
>
>   nsresult rv;
>   T ts(rv);
>   if (NS_FAILED(rv)) {
>     return rv;
>   }
>   T* th = new T(rv);
>   if (NS_FAILED(rv)) {
>     delete th;
>     return rv;
>   }
>
> For constructors with additional arguments, I propose that the |nsresult&|
> argument go last.
>
> Using a bool outparam would be possible some of the time, but I suggest
> always using nsresult for consistency, esp. given that using bool here
> would be no more concise.
>
> SpiderMonkey is different because (a) its |operator new| is fallible and
> (b) it doesn't use nsresult. So for heap-allocated objects we *would* use
> bool, going from this:
>
>   T* th = new T();
>   if (!th) {
>     return false;
>   }
>   if (!th->Init()) {
>     delete th;
>     return false;
>   }
>
> to this:
>
>   bool ok;
>   T* th = new T(ok);
>   if (!th || !ok) {
>     delete th;
>     return false;
>   }
>
> These examples don't show inheritance, but this proposal works out
> straightforwardly in that case.
>
> The advantages of this proposal are as follows.
>
> - Construction is atomic. It's not artificially split into two, and there's
>   no creation of half-initialized objects. This tends to make the code
>   nicer overall.
>
> - Constructors are special because they have initializer lists -- there are
>   things you can do in initializer lists that you cannot do in normal
>   functions. In particular, using an Init() function prevents you from
>   using references and |const| for some members. This is bad because
>   references and |const| are good things that can make code more reliable.
>   (A sketch illustrating this follows the list.)
>
> - There are fewer things to forget at call sites. With our current approach
>   you can forget (a) to call Init(), and (b) to check the result of Init().
>   With this proposal you can only forget to check |rv|.
>
> The only disadvantage I can see is that it looks a bit strange at first.
> But if we started using it that objection would quickly go away.
>
> I have some example patches that show what this code pattern looks like in
> practice. See bug 1265626 parts 1 and 4, and bug 1265965 part 1.
>
> Thoughts?
>
> Nick


Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies

2016-04-14 Thread Eric Rescorla
This seems like the right question.

-Ekr

On Thu, Apr 14, 2016 at 9:17 AM, Kyle Huey  wrote:

> Why should we be the ones to take the web compat hit on this?
>
> - Kyle
> On Apr 14, 2016 1:55 AM, "Chris Peterson"  wrote:
>
> > Summary: Treat cookies set over non-secure HTTP as session cookies
> >
> > Exactly one year ago today (!), Henri Sivonen proposed [1] treating
> > cookies without the `secure` flag as session cookies.
> >
> > PROS:
> >
> > * Security: login cookies set over non-secure HTTP can be sniffed and
> > replayed. Clearing those cookies at the end of the browser session would
> > force the user to log in again next time, reducing the window of
> > opportunity for an attacker to replay the login cookie. To avoid this,
> > login-requiring sites should use HTTPS for at least their login page that
> > set the login cookie.
> >
> > * Privacy: most ad networks still use non-secure HTTP. Content sites that
> > use these ad networks are prevented from deploying HTTPS themselves
> because
> > of HTTP/HTTPS mixed content breakage. Clearing user-tracking cookies set
> > over non-secure HTTP at the end of every browser session would be a
> strong
> > motivator for ad networks to upgrade to HTTPS, which would unblock
> content
> > sites' HTTPS rollouts.
> >
> > However, my testing of Henri's original proposal shows that too few sites
> > set the `secure` cookie flag for this to be practical. Even sites that
> > primarily use HTTPS, like google.com, omit the `secure` flag for many
> > cookies set over HTTPS.
> >
> > Instead, I propose treating all cookies set over non-secure HTTP as
> > session cookies, regardless of whether they have the `secure` flag.
> Cookies
> > set over HTTPS would be treated as "secure so far" and allowed to persist
> > beyond the current browser session. This approach could be tightened so
> any
> > "secure so far" cookies later sent over non-secure HTTP could be
> downgraded
> > to session cookies. Note that Firefox's session restore will persist
> > "session" cookies between browser restarts for the tabs that had been
> open.
> > (This is "eternal session" feature/bug 530594.)
> >
> > To test my proposal, I loaded the home pages of the Alexa Top 25 News
> > sites [2]. These 25 pages set over 1300 cookies! Fewer than 200 were set
> > over HTTPS and only 7 had the `secure` flag. About 900 were third-party
> > cookies. Treating non-secure cookies as session cookies means that over
> > 1100 cookies would be cleared at the end of the browser session!
> >
> > CONS:
> >
> > * Sites that allow users to configure preferences without logging into an
> > account would forget the users' preferences if they are not using HTTPS.
> > For example, companies that have regional sites would forget the user's
> > selected region at the end of the browser session.
> >
> > * Ad networks' opt-out cookies (for what they're worth) set over
> > non-secure HTTP would be forgotten at the end of the browser session.
> >
> > Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1160368
> >
> > Link to standard: N/A
> >
> > Platform coverage: All platforms
> >
> > Estimated or target release: Firefox 49
> >
> > Preference behind which this will be implemented:
> > network.cookie.lifetime.httpSessionOnly
> >
> > Do other browser engines implement this? No
> >
> > [1]
> >
> https://groups.google.com/d/msg/mozilla.dev.platform/xaGffxAM-hs/aVgYuS3QA2MJ
> > [2] http://www.alexa.com/topsites/category/Top/News


Re: Triage Plan for Firefox Components

2016-04-08 Thread Eric Rescorla
Hmm... Well, whether platform adopts this universally or not seems like a
question for Doug and Johnny. Though I'm sure they'll take your input
seriously.

Doug, do you have any thoughts on how long an evaluation period you think is
appropriate here?

-Ekr



On Fri, Apr 8, 2016 at 3:25 PM, Emma Humphries  wrote:

> And yes, that's what I mean by Platform.
>
> Thanks.
>
> On Fri, Apr 8, 2016 at 11:24 AM, Andrew McCreight 
> wrote:
>
> > Emma can correct me if I'm wrong, but I think she is using "Firefox" in
> > the non-jargony sense of the entire thing we're shipping in Firefox,
> > including Gecko. We've been using this system for a month or so in DOM. I
> > think it has been going well. Anybody who is interested can ask Andrew
> > Overholt or I for details.
> >
> > Andrew
> >
> > On Fri, Apr 8, 2016 at 10:52 AM, Douglas Turner 
> wrote:
> >
> >> Emma,
> >>
> >> Thanks for doing this.
> >>
> >> I'm not sure whether something like this would work for platform
> >> engineering, but we'll keep an eye how things develop with Firefox and
> >> might consider it once we have some experience there.  I encourage you
> to
> >> report back here when Firefox starts using this system and has some
> lessons
> >> learned.
> >>
> >> I also want to thank all of the people that participated in this
> >> conversation.
> >>
> >> Doug


Re: Triage Plan for Firefox Components

2016-04-06 Thread Eric Rescorla
Sorry to be dense, but if I understand correctly, you'd like to:

1. Have a policy that all of Gecko needs to triage bugs in a certain way.
2. Redefine how everyone defines priorities?

And you think that 24 hours is enough time to get consensus on that.

Do I have that right?

-Ekr


On Wed, Apr 6, 2016 at 10:47 PM, Emma Humphries  wrote:

> Following up on yesterday's email: I put together a draft second proposal
> and shopped it around some, and now I want to bring that back into the main
> discussion.
>
> The bullet point version of this is:
>
> * Add a binary field that components can use, TRIAGED (Y/N, T/F, +,-)
> * In the case of Firefox related components, have a consistent definition
> of P1-P5 and make sure that triaged bugs have a Priority assigned
>
> This has a couple of implications:
>
> This means I have to have a plan in place for dealing with Priority going
> from ad-hoc to one set of meanings for bugs in Firefox components.
>
> If any of you have worked with longitudinal social sciences data sets,
> this is a common thing. Values for fields change over time, and researchers
> consult documentation so that code consuming the data would work with the
> discontinuities. Some of this can be handled through bugzilla UI, so that
> components using the TRIAGED flag would display the description
> corresponding to P1-P5 and other components would not.
>
> We also need to go through all the existing whiteboard, keywords, and
> custom flags we are using for e10s and other projects that are being used
> to indicate importance.
>
> I'd like to finish up feedback by the end of the working day Thursday
> the 6th (PST).
>
> Then we'll get to work on a solid specification for the work so we can
> start implementation sometime in Q2.
>
> Thanks.
>
> -- Emma
>
> On Tue, Apr 5, 2016 at 5:27 PM, Emma Humphries  wrote:
>
>> It's been a week since I asked for your comments on the plan for triage,
>> thank you.
>>
>> I'm going to reply to some general comments on the plan, and outline next
>> steps.
>>
>> Ekr and others said that up to now, individual teams have owned how they
>> triage and prioritized bugs. Mozilla has made commitments to how we are
>> going to follow up with people filing bugs. Thus we need consistent
>> decisions across all the components that go into Firefox about bugs that we
>> can share back to non-Mozillians on bugs they file, so that we can get them
>> to contribute more high-quality bugs, and participate in other efforts in
>> support of the project and the Open Web. I'm aware I'm asking teams with
>> existing process to make a change, but it's for a global gain.
>>
>> Several people pointed out all the fields in Bugzilla that have and could
>> be used to manage priorities, such as priority and rank. But we don't use
>> the priority field consistently across the project. I've asked for teams to
>> document how they use Priority,
>> https://wiki.mozilla.org/Bugmasters/Projects/Folk_Knowledge/Priority_Field,
>> and you'll see how that varies.
>>
>> When I checked how the Priority field was used in Firefox-related
>> components, that distribution was:
>>
>> ---  460,362
>> P1    14,304
>> P2    15,971
>> P3    37,933
>> P4     4,204
>> P5     2,913
>>
>> The bulk of bugs in Firefox-related components are P3, most likely
>> because we have a bug filing form that defaults to P3 and that needs to be
>> fixed if it's still in use.
>>
>> Having to make what seemed like snap-decisions on bugs was also a point
>> of concern, but that's something the proposal had a workaround for, using
>> needinfo? to defer a triage decision on a bug until enough questions were
>> answered. And since we made a commitment to make decisions on bugs, we need
>> back pressure on untriaged bugs.
>>
>> But from what I read, y'all are amenable to standardizing the priority
>> flag's use in Triage. Doing that would create a discontinuity in historical
>> data, but that's not an insurmountable problem, and we can document that
>> breakage for researchers using historical data.
>>
>> So next step is a second proposal, simplified, using Priority to
>> represent triage decisions.
>>
>> In addition, I'll want to remove several fields which are not useful, or
>> superfluous from the bug entry wizards. Priority is a field that should be
>> set by people triaging bugs, not entering them. We have a keyword
>> vocabulary which is more expressive than severity. And our bug entry forms
>> don't show the version affected, or the STR (steps to reproduce) flags
>> which means it's an extra edit to get the information relman needs into a
>> bug.
>>
>> Thank you again for your time and consideration as we make Bugzilla and
>> Firefox better for everyone.
>>
>> -- Emma Humphries
>>
>>
>> On Tue, Mar 29, 2016 at 1:07 PM, Emma Humphries  wrote:
>>
>>> tl;dr
>>>
>>> In Quarter Two I'm implementing the work we’ve been doing to improve
>>> triage, make actionable decisions on new bugs, and 

Re: Triage Plan for Firefox Components

2016-04-06 Thread Eric Rescorla
On Tue, Apr 5, 2016 at 9:27 PM, Emma Humphries  wrote:

> It's been a week since I asked for your comments on the plan for triage,
> thank you.
>
> I'm going to reply to some general comments on the plan, and outline next
> steps.
>
> Ekr and others said that up to now, individual teams have owned how they
> triage and prioritized bugs. Mozilla has made commitments to how we are
> going to follow up with people filing bugs.
>

Where were those commitments made? Can you send a link?



> The bulk of bugs in Firefox-related components are P3, most likely because
> we have a bug filing form that defaults to P3 and that needs to be fixed if
> it's still in use.
>
> Having to make what seemed like snap-decisions on bugs was also a point of
> concern, but that's something the proposal had a workaround for, using
> needinfo? to defer a triage decision on a bug until enough questions were
> answered. And since we made a commitment to make decisions on bugs, we need
> back pressure on untriaged bugs.
>
> But from what I read, y'all are amenable to standardizing the priority
> flag's use in Triage
>

I don't think this is a question of which *flag* to use so much as whether
it's useful to produce a new flat taxonomy which is redundant with the
existing priority mechanisms that teams are using, which in many cases are
richer than a three-level (now, soon, no-plan) hierarchy as you propose.

I think the fundamental problem here is that you're trying to design
something that might be useful for defects but isn't useful for a large
fraction of bugs which are actually a method of documenting planned new
work. But Bugzilla needs to work for all of these.

-Ekr


> In addition, I'll want to remove several fields which are not useful, or
> superfluous from the bug entry wizards. Priority is a field that should be
> set by people triaging bugs, not entering them. We have a keyword
> vocabulary which is more expressive than severity. And our bug entry forms
> don't show the version affected, or the STR (steps to reproduce) flags
> which means it's an extra edit to get the information relman needs into a
> bug.
>
> Thank you again for your time and consideration as we make Bugzilla and
> Firefox better for everyone.
>
> -- Emma Humphries
>
>
> On Tue, Mar 29, 2016 at 1:07 PM, Emma Humphries  wrote:
>
>> tl;dr
>>
>> In Quarter Two I'm implementing the work we’ve been doing to improve
>> triage, make actionable decisions on new bugs, and prevent us from shipping
>> regressions in Firefox.
>>
>> Today I’m asking for feedback on the plan which is posted at:
>>
>>
>> https://docs.google.com/document/d/1FFrtS0u6gNBE1mxsGJA9JLseJ_U6tW-1NJvHMq551ko
>>
>> Allowing bugs to sit around without a decision on what we will do about
>> them sends the wrong message to Mozillians about how we treat bugs, how we
>> value their involvement, and reduces quality.
>>
>> The Firefox quality team (myself, Mike Hoye, Ryan VanderMeulen, Mark
>> Cote, and Benjamin Smedberg) want to make better assertions about the
>> quality of our releases by giving you tools to make clear decisions about
>> which bugs must be fixed for each release (urgent) and actively tracking
>> those bugs.
>> What We Learned From The Pilot Program
>>
>> During the past 6 weeks, we have prototyped and tested a triage process
>> with the DOM, Hello, and Developer Tools teams.
>>
>> Andrew Overholt, who participated in the pilot for the DOM team, said, “A
>> consistent bug triage process can help us spread the load of watching
>> incoming bugs and help avoid issues falling through the cracks."
>>
>> During the pilot, the DOM team uncovered critical bugs quickly so that
>> people could be assigned to them.
>>
>> The pilot groups also found that the triage process needs to be fast and
>> have tooling to make going through bugs fast. It’s easy to fall behind on
>> triage for a component, but if you stay up to date it will take no more
>> than 15 minutes a day.
>>
>> You can find the bugs we triaged during the pilot by looking for
>> whiteboard tags containing ‘btpp-’.
>>
>> It is also important to have consistent, shared definitions for
>> regression across components so triagers do not waste effort on mis-labeled
>> bugs.
>> Comments?
>>
>> I am posting this plan now for comment over the next week. I intend to
>> finalize the triage plan for implementation by Tuesday, April 5th. Feedback
>> and questions are welcome on the document, privately via email or IRC
>> (where I’m emceeaich) or on the bugmast...@mozilla.org mailing list.
>> Timeline
>>
>> January: finish finding component responsible parties
>>
>> February: pilot review of NEW bugs with four groups of components, draft
>> new process
>>
>> Now: comment period for new process, finalize process
>>
>> Q2: implement new process across all components involved in shipping
>> Firefox
>> Q3: all newly triaged bugs following the new process
>>
>> -- Emma Humphries, Bugmaster
>>
>
>
> 

Re: Why is Mozreview hassling me about squashed commits?

2016-04-04 Thread Eric Rescorla
On Mon, Apr 4, 2016 at 10:26 PM, Mark Côté <mc...@mozilla.com> wrote:

> On 2016-04-04 8:41 PM, Eric Rescorla wrote:
> > On Mon, Apr 4, 2016 at 9:23 PM, Mark Côté <mc...@mozilla.com> wrote:
> >> To answer the original question, though, at this time we have no plans
> >> to completely do away with the squashed-commit view.  However, in the
> >> interests of ensuring that the commits that will land are the ones
> >> reviewed, we are thinking of making the squashed diff read-only--and
> >> also more obvious and easy to find, probably by not making it a proper
> >> review request, just a diff linked off the commit table (the layout of
> >> which we also have plans to improve).  I agree that seeing the entirety
> >> of a series of commits in a single diff can be useful.
> >
> >
> > What does read-only mean? If it's that you can't comment on it, then that
> > doesn't work.
>
> That is, after we have support for squash-on-push.


Ah, I understand.


> At that point we
> wouldn't need squashed diffs for your workflow, only for micro commits
> where the reviewer wants to see a high-level view.  Or am I
> misunderstanding?
>

Yes, this seems fine.

-Ekr


>
> Mark
>
>

