Thunderbird, the future, mozilla-central and comm-central

2015-11-30 Thread Mitchell Baker
This is a long-ish message. It covers general topics about Thunderbird 
and the future, and also the topics of the Foundation’s involvement (point 
9) and the question of merging repositories (point 11).  Naturally, I 
believe it’s worth the time to read through to the end.


1. Firefox and Thunderbird have lived with competing demands for some 
time now. Today Thunderbird developers spend much of their time 
responding to changes made in core Mozilla systems and technologies. At 
the same time, build, Firefox, and platform engineers continue to pay a 
tax to support Thunderbird.


2. These competing demands are not good for either project. Engineers 
working on Thunderbird must focus on keeping up with, and adapting to, 
Firefox’s web-driven changes. Engineers working on Firefox and related projects 
end up considering the competing demands of Thunderbird, and/or 
wondering if and how much they should assist Thunderbird. Neither 
project can focus wholeheartedly on what is best for it.


3. These competing demands will not get better soon. Instead, they are 
very likely to get worse. Firefox and related projects are now speeding 
up the rate of change, modernizing our development process and our 
infrastructure. Indeed, this is required for Mozilla to have significant 
impact in the current computing environment.


4. There is a belief among some that living with these competing demands 
is good for the Mozilla project as a whole, because it gives us an 
additional focus, assists Thunderbird as a dedicated open source 
community, and also supports an open source standards based email 
client. This sentiment is appealing, and I share it to some extent. 
There is also a sense that caring for fellow open source developers is 
good, which I also share.  However, point 2 above -- “Neither project can 
focus wholeheartedly on what is best for it” -- is the most important 
point. Having Thunderbird as an additional product and focus is *not* 
good overall if it causes all of our products — Firefox, other 
web-driven products and Thunderbird — to fall short of what we can 
accomplish.


5.  Many inside of Mozilla, including an overwhelming majority of our 
leadership, feel the need to be laser-focused on activities like Firefox 
that can have an industry-wide impact. With all due respect to 
Thunderbird and the Thunderbird community, we have been clear for years 
that we do not view Thunderbird as having this sort of potential.


6.  Given this, it’s clear to me that sooner or later paying a tax to 
support Thunderbird will not make sense as a policy for Mozilla. I 
know many believe this time came a while back, and I’ve been slow to say 
this clearly.  And of course, some feel that this time should never 
come.  However, as I say, it’s clear to me today that continuing to live 
with these competing demands given our focus on industry impact is 
increasingly unstable.  We’ve seen this already, in an unstructured way, 
as various groups inside Mozilla stop supporting Thunderbird.  The 
accelerating speed of Firefox and infrastructure changes -- which I 
welcome wholeheartedly -- will emphasize this.


7.  Some Mozillians are eager to see Mozilla support community-managed 
projects within our main development efforts.  I am also sympathetic to 
this view, with a key precondition.  Community-managed projects that 
make the main effort less nimble and less likely to succeed don’t fit very 
well into this category for me.  They can still be great open source 
projects -- that is a separate question from whether they fit in our main 
development systems.  I feel so strongly about this because I am so 
concerned that “the Web” we love is at risk.  If we want the traits of 
the Web to live and prosper in the world of mobile, social and data then 
we have to be laser-focused on this.


8.  Therefore I believe Thunderbird would thrive best by 
separating itself from reliance on Mozilla development systems and in 
some cases, Mozilla technology. The current setting isn’t stable, and we 
should start actively looking into how we can transition in an orderly 
way to a future where Thunderbird and Firefox are un-coupled.   I don’t 
know what this will look like, or how it will work yet. I do know that 
it needs to happen, for both Firefox and Thunderbird’s sake.  This is a 
big job, and may require expertise that the Thunderbird team doesn’t yet 
have. Mozilla can provide various forms of assistance to the 
Thunderbird team via a set of the Mozilla Foundation’s capabilities.


9. Mark Surman of the Mozilla Foundation and I are both interested in 
helping find a way for Thunderbird to separate from Mozilla 
infrastructure. We also want to make sure that Thunderbird has the right 
kind of legal and financial home, one that will help the community 
thrive. Mark has been talking with the Thunderbird leadership about 
this, and has offered some of his time and focus and resources to 
assist. He will detail that offer in a separate message.

Re: Why do we flush layout in nsDocumentViewer::LoadComplete?

2015-11-30 Thread Robert O'Callahan
On Tue, Dec 1, 2015 at 4:26 AM, Ehsan Akhgari 
wrote:

Should we finally bite the bullet and force a flush in reftests/crashtests?


You mean force a flush in the load event handler in the test harness,
before the test's load event handler runs?


> After thinking about this for a while, I haven't been able to come up with
> something specific that we'd risk breaking on the Web without our
> reftests/crashtests catching it...


How about a bug where we're missing a necessary FlushPendingNotifications,
and someone writes a reftest/crashtest load event handler that would
trigger the bug if not for the harness flush? That might not be worth
worrying about though.

I think we should just grind through reftest/crashtests and add an explicit
flush to every onload/load event handler. Mind-numbing work, but it would
take at most a week based on my count.
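
To make that concrete, here is a minimal sketch of what such an explicit
flush could look like in a test's load handler. Reading a layout property
is one common way to force a synchronous flush from content JS; whether
the mass rewrite would use this exact idiom is an open question.

  window.addEventListener("load", function () {
    // Reading a layout property forces a synchronous style/layout flush,
    // so the test no longer depends on the harness flushing for it.
    void document.documentElement.offsetHeight;
    // ... test logic that assumes up-to-date layout goes here ...
  });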

Rob


Re: Using the Taskcluster index to find builds

2015-11-30 Thread Gijs Kruitbosch
FWIW, I used this for "diditland" - 
http://www.gijsk.com/blog/2015/11/did-it-land/ and 
https://gijsk.github.io/diditland/ . It was a significant improvement 
over scraping archive.mozilla.org's HTML listing for a month's nightlies, 
finding the right folder for a nightly, then scraping that for the right 
JSON file, and then reading that JSON file...


So thanks!

~ Gijs

On 30/11/2015 20:43, Chris AtLee wrote:

The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
work behind the scenes over the past few months to migrate the backend
storage for builds from the old "FTP" host to S3. While we've tried to make
this as seamless as possible, the new system is not a 100% drop-in
replacement for the old system, resulting in some confusion about where to
find certain types of builds.

At the same time, we've been working on publishing builds to the
Taskcluster Index [1]. This service provides a way to find a build given
various different attributes, such as its revision or date it was built.
Our plan is to make the index be the primary mechanism for discovering
build artifacts. As part of the ongoing buildbot to Taskcluster migration
project, builds happening on Taskcluster will no longer upload to
https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut off
platforms in buildbot, the index will be the only mechanism for discovering
new builds.

I posted to planet Mozilla last week [2] with some more examples and
details. Please explore the index, and ask questions about how to find what
you're looking for!

Cheers,
Chris

[1] http://docs.taskcluster.net/services/index/
[2] http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html





Re: ESLint is now available in the entire tree

2015-11-30 Thread Tim Guan-tin Chien
The Gecko JavaScript is also littered with #ifdef, and # is really not a
comment token in JS... is there any plan to migrate away from that now that
ESLint is present?

On Sun, Nov 29, 2015 at 10:37 PM, Vivien Nicolas 
wrote:

> On Sun, Nov 29, 2015 at 2:30 PM, David Bruant  wrote:
>
> > Hi,
> >
> > Just a drive-by comment to inform folks that there is an effort to
> > transition the Mozilla JavaScript codebase to standard JavaScript.
> > The main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617
> >
> > And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about
> > removing non-standard features from SpiderMonkey.
> > Of course this can rarely be done right away and most often requires
> > dependent bugs to move code to standard ECMAScript (with a period of
> > warnings about the usage of the non-standard feature).
> >
>
> What about .jsm modules ? Or is that not really considered ?
>
> I have been told that ES6 modules may help to solve some of the problems
> covered by .jsm, but I don't see how you can create an ES6 module that can
> be accessed from multiple JS contexts from the same origin. Mostly
> interested as it would be nice to be able to write a module once and share
> it between multiple tabs instead of having to reload the same JS script for
> all similar tabs, like all the bugzilla tabs many of us have open for
> example.


Re: ESLint is now available in the entire tree

2015-11-30 Thread Patrick Brosset
I don't know how much work is involved with getting rid of non-standard
spidermonkey syntax and pre-processors, but if it's a lot, then one option
would be to fork the espree parser (used by eslint), make it support those,
and configure eslint to use our fork:
http://eslint.org/docs/user-guide/configuring.html#specifying-parser
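
For illustration, a hedged sketch of what that configuration could look
like in .eslintrc; "espree-moz" is a made-up name for the hypothetical
fork, not a real package:

  {
    // "espree-moz" is a hypothetical published name for our espree fork.
    "parser": "espree-moz",
    "rules": {
      "semi": 2
    }
  }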


On Mon, Nov 30, 2015 at 10:05 AM, Tim Guan-tin Chien 
wrote:

> The Gecko JavaScript is also littered with #ifdef, and # is really not a
> comment token in JS... is there any plan to migrate away from that now that
> ESLint is present?
>
> On Sun, Nov 29, 2015 at 10:37 PM, Vivien Nicolas 
> wrote:
>
> > On Sun, Nov 29, 2015 at 2:30 PM, David Bruant 
> wrote:
> >
> > > Hi,
> > >
> > > Just a drive-by comment to inform folks that there is an effort to
> > > transition the Mozilla JavaScript codebase to standard JavaScript.
> > > The main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617
> > >
> > > And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about
> > > removing non-standard features from SpiderMonkey.
> > > Of course this can rarely be done right away and most often requires
> > > dependent bugs to move code to standard ECMAScript (with a period of
> > > warnings about the usage of the non-standard feature).
> > >
> >
> > What about .jsm modules ? Or is that not really considered ?
> >
> > I have been told that ES6 modules may help to solve some of the problems
> > covered by .jsm, but I don't see how you can create an ES6 module that
> > can be accessed from multiple JS contexts from the same origin. Mostly
> > interested as it would be nice to be able to write a module once and
> share
> > it between multiple tabs instead of having to reload the same JS script
> for
> > all similar tabs, like all the bugzilla tabs many of us have open for
> > example.


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Gijs Kruitbosch

On 29/11/2015 02:56, Dan Stillman wrote:

You can block known malware signatures with the scanner if you think
that's a good use of time. But that doesn't require blocking valid APIs
and patterns that have legitimate uses. That's what we're discussing
here. AV software doesn't result in long delays in legitimate software
updates so that AV vendors can manually review software.


It doesn't work the same way because AV vendors have no control over 
what apps the OS is letting run, but if it did, it would cause the same 
problems. Quick bugzilla search:


https://bugzilla.mozilla.org/show_bug.cgi?id=1116819
https://bugzilla.mozilla.org/show_bug.cgi?id=1168855
https://bugzilla.mozilla.org/show_bug.cgi?id=1095049
https://bugzilla.mozilla.org/show_bug.cgi?id=799980

I haven't looked at the bugs, but they are a small sample of a large set 
of bugs, and it's just a fact that we (just like other legitimate 
software developers) occasionally get flagged by various 
anti-virus/malware software.


~ Gijs


Re: ESLint is now available in the entire tree

2015-11-30 Thread Frederik Braun
On 30.11.2015 10:29, Patrick Brosset wrote:
> I don't know how much work is involved with getting rid of non-standard
> spidermonkey syntax and pre-processors, but if it's a lot, then one option
> would be to fork the espree parser (used by eslint), make it support those,
> and configure eslint to use our fork:
> http://eslint.org/docs/user-guide/configuring.html#specifying-parser
> 

Or use Babel; it supports a bigger portion of the bleeding edge, in my
experience :)



APZ enabled on Fennec nightly

2015-11-30 Thread Kartikaya Gupta
Hi all,

Just a heads up that I landed the patch to enable APZ on Fennec
(nightly channel only for now). It should be in the Dec 1 nightly and
onwards. This will make scrolling around, and general touch input
handling, feel different on Fennec. The main improvement should be
that scrolling of iframes and overflow:scroll divs will be smoother
and faster.

If you find bugs, or behaviour differences that you feel make things
worse, please file them in the "Graphics, Panning and Zooming"
component of the "Firefox for Android" component and we'll take a
look.

Thanks!
kats


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Jonas Sicking
On Mon, Nov 30, 2015 at 7:38 AM, Henri Sivonen  wrote:
> Other browsers don't support this extension, so it clearly can't be a
> requirement for the Web Platform

Generally speaking, I don't think this reasoning is entirely accurate.
We know that there are lots of browser-specific code paths out there, so
just because other browsers don't support a given feature doesn't mean
that removing it from gecko won't affect our users.

Getting telemetry data is generally a better way to go.
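
For example, here is a rough sketch of what a usage probe could look like
from chrome JS; the histogram name below is invented and would also need
an entry in Histograms.json, and since the decoder itself is C++, the
real probe would more likely use Telemetry::Accumulate there:

  // Count how often the ISO-2022-JP-2 escape sequences are actually hit.
  Components.utils.import("resource://gre/modules/Services.jsm");
  Services.telemetry
          .getHistogramById("DECODER_ISO_2022_JP_2_SEEN") // hypothetical
          .add(1);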

That said, I know nothing about the specific encodings involved here,
so maybe there are other reasons to believe that this won't affect our
users.

/ Jonas


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Jörg Knobloch

On 30/11/2015 16:38, Henri Sivonen wrote:

Are there any objections to removing the ISO-2022-JP-2 functionality
from mozilla-central?


Hello,

I am currently in the process of repairing long-standing issues with CJK 
e-mail in general and Japanese e-mail using ISO-2022-JP in particular; 
see below.


While working on these bugs I learned that ISO-2022-JP is still widely 
used for e-mail in Japan, especially in more conservative circles like 
the banking sector.


As far as I see, there are no objections to removing the ISO-2022-JP-2 
variant as long as ISO-2022-JP itself is maintained.


In Thunderbird we allow sending (encoding) and viewing (decoding) e-mail 
using ISO-2022-JP but not ISO-2022-JP-2.


Also note:
https://dxr.mozilla.org/mozilla-central/source/intl/uconv/nsTextToSubURI.cpp#163

Jorg K.

https://bugzilla.mozilla.org/show_bug.cgi?id=1225864 - M-C
https://bugzilla.mozilla.org/show_bug.cgi?id=1225904 - C-C
https://bugzilla.mozilla.org/show_bug.cgi?id=653342 - C-C





Re: Why do we flush layout in nsDocumentViewer::LoadComplete?

2015-11-30 Thread Ehsan Akhgari

On 2015-11-28 1:00 AM, Boris Zbarsky wrote:

On 11/27/15 2:15 AM, Axel Hecht wrote:

I wonder, how much of the web could rely on this, given our tests do?


Our test failures were mostly along the lines of "we expected an
assertion here and now we don't get one".  I doubt that would much
affect the web.


Should we finally bite the bullet and force a flush in 
reftests/crashtests?  After thinking about this for a while, I haven't 
been able to come up with something specific that we'd risk breaking on 
the Web without our reftests/crashtests catching it...




Better mach support for Firefox for Android

2015-11-30 Thread Geoffrey Brown
In recent months, many improvements have been made to mach commands to
support running, testing, and debugging Firefox for Android:

 - More test commands for Android. These mach test commands now support
Firefox for Android explicitly:

  mach mochitest
  mach robocop
  mach reftest
  mach crashtest
  mach jstestbrowser
  mach xpcshell-test
  mach cppunittest


 - Emulator support. 'mach android-emulator' launches the Android emulator,
using the same Android image used to run tests seen on treeherder; select
an image type with the --version option.

 - All of the test, run, and debug commands offer to start the Android
emulator if no Android device is connected (when run from an Android
context).

  $ ./mach mochitest testing/mochitest/tests/Harness_sanity
  No Android devices connected. Start an emulator? (Y/n)

 - All test, run, and debug commands offer to install Firefox on the
connected device or emulator if Firefox is not already installed.

  $ ./mach mochitest testing/mochitest/tests/Harness_sanity
  It looks like Firefox is not installed on this device.
  Install Firefox? (Y/n)

 - Test commands requiring host xpcshell offer to install "host utilities"
if none have been configured.

 - Firefox can be run on an Android device or emulator with 'mach run'.

 - JimDB, a GDB fork explicitly supporting debugging for Firefox for
Android, can be installed, configured, and run with 'mach run --debug'.

 - Emulator images, host utilities, and JimDB are automatically downloaded,
cached, and installed as needed.

 - Firefox for Android wiki pages have been updated:
- Build info at https://wiki.mozilla.org/Mobile/Fennec/Android
- Testing info at https://wiki.mozilla.org/Mobile/Fennec/Android/Testing
- Debugging with GDB at
https://wiki.mozilla.org/Mobile/Fennec/Android/GDB.

 - Screencasts demonstrate common tasks at
https://people.mozilla.org/~gbrown/android-demos/.

Running, testing, and debugging Firefox will always be more complicated on
Android than on desktop, but these tasks are now almost as easy on Android,
and can be performed with the same mach commands as on desktop.

If you have had trouble in the past running, testing, or debugging your own
Firefox for Android build, this is a great time to try again. All you need
to get started is a Firefox for Android build on a Linux or OSX computer.
Something not working for you? Have more ideas for improvements? Let me
know.


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Gavin Sharp
That's one of the suggestions Dan Stillman makes in his post, and it
seems like a fine idea to me.

Gavin

On Mon, Nov 30, 2015 at 11:15 AM, Jonathan Kew  wrote:
> On 30/11/15 15:45, Gavin Sharp wrote:
>>>
>>> and it's definitely the wrong thing to do.
>>
>>
>> Fundamentally the add-on signing system was designed with an important
>> trade-off in mind: security (ensuring no malicious add-ons are
>> installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
>> that building and distributing add-ons is as easy as it can be).
>>
>> If your proposed alternative plan is "get rid of automatic signing", then
>> we know that it's going to significantly hamper Mozilla's ability to
>> maintain a healthy add-on ecosystem, and harm what were considered some
>> important add-on use cases. I don't think it strikes the right balance.
>>
>> If your proposed alternative plan is something else, maybe it would help
>> to
>> clarify it.
>>
>
> Perhaps if there were a mechanism whereby "trusted" add-on developers could
> have their add-ons -- or even just updates for
> previously-reviewed-and-signed add-ons -- automatically signed without
> having to jump through the validator/review hoops each time?
>
> How would a developer acquire "trusted" status? By demonstrating a track
> record of producing add-ons that pass AMO review -- which may be a
> combination of automatic validation and/or human review.
>
> And of course any add-on developer who is found to have abused their
> "trusted" status to sign and deploy malicious code would have that status
> revoked, in addition to the malicious add-on being blocked.
>
> ISTM this would maintain most of the intended benefits of the signing
> system, while substantially smoothing the path for developers such as Dan
> who need to deliver frequent updates to their users.
>
> Feasible?
>
> JK
>
>


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Thomas Zimmermann
Hi

On 27.11.2015 at 16:50, Gavin Sharp wrote:
> On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham  wrote:
>> But the thing is, members of our security group are now piling into the
>> bug pointing out that trying to find malicious JS code by static code
>> review is literally _impossible_ (and perhaps hinting that they'd have
>> said so much earlier if someone had asked them).
> No, that's not right. There's an important distinction between
> "finding malicious JS code" and "finding _all_ malicious JS code". The
> latter is impossible, but the former isn't.
>
> Proving "the validator won't catch everything" isn't particularly
> relevant when it isn't intended to, in the overall add-on signing
> system design.

I think the fact that the validator (or manual review) cannot catch
everything is very relevant.

Users cannot rely on the review process (automatic or manual), because
it can never catch all bugs (malicious or not). So users still have to
rely on an extension's developers to get their code into good shape;
just as it is currently the case. And I'd guess that malicious code will
get more sophisticated when the review procedures improve.

Another point is that one never knows how close to 'good' an extension
or a review is, because this would require knowledge about the absolute
number of bugs in the extension. Getting this number requires a perfect
validator. So all bugs from a review might get fixed, but the overall
extension is still in the 'crap territory'. I'm a bit surprised that
this hasn't been mentioned here yet.

Therefore I'm skeptical about the effective benefit for the users. The
mandatory review seems to create a promise of security that it cannot
fulfill. Reviews and validation are good things, but holding back an
update for a pending review doesn't seem helpful.

Best regards
Thomas

>
> Gavin


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Gavin Sharp
It looks to me like you're arguing about a separate point (AMO review
requirements for add-on updates), when the subject at hand is the add-on
signing system's reliance on the AMO validator as the only prerequisite for
automatic signing.

Gavin

On Mon, Nov 30, 2015 at 10:30 AM, Thomas Zimmermann  wrote:

> Hi
>
> On 27.11.2015 at 16:50, Gavin Sharp wrote:
> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham 
> wrote:
> >> But the thing is, members of our security group are now piling into the
> >> bug pointing out that trying to find malicious JS code by static code
> >> review is literally _impossible_ (and perhaps hinting that they'd have
> >> said so much earlier if someone had asked them).
> > No, that's not right. There's an important distinction between
> > "finding malicious JS code" and "finding _all_ malicious JS code". The
> > latter is impossible, but the former isn't.
> >
> > Proving "the validator won't catch everything" isn't particularly
> > relevant when it isn't intended to, in the overall add-on signing
> > system design.
>
> I think the fact that the validator (or manual review) cannot catch
> everything is very relevant.
>
> Users cannot rely on the review process (automatic or manual), because
> it can never catch all bugs (malicious or not). So users still have to
> rely on an extension's developers to get their code into good shape;
> just as it is currently the case. And I'd guess that malicious code will
> get more sophisticated when the review procedures improve.
>
> Another point is that one never knows how close to 'good' an extension
> or a review is, because this would require knowledge about the absolute
> number of bugs in the extension. Getting this number requires a perfect
> validator. So all bugs from a review might get fixed, but the overall
> extension is still in the 'crap territory'. I'm a bit surprised that
> this hasn't been mentioned here yet.
>
> Therefore I'm skeptical about the effective benefit for the users. The
> mandatory review seems to create a promise of security that it cannot
> fulfill. Reviews and validation are good things, but holding back an
> update for a pending review doesn't seem helpful.
>
> Best regards
> Thomas
>
> >
> > Gavin


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Gavin Sharp
> and it's definitely the wrong thing to do.

Fundamentally the add-on signing system was designed with an important
trade-off in mind: security (ensuring no malicious add-ons are
installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
that building and distributing add-ons is as easy as it can be).

If your proposed alternative plan is "get rid of automatic signing", then
we know that it's going to significantly hamper Mozilla's ability to
maintain a healthy add-on ecosystem, and harm what were considered some
important add-on use cases. I don't think it strikes the right balance.

If your proposed alternative plan is something else, maybe it would help to
clarify it.

Gavin

On Mon, Nov 30, 2015 at 9:33 AM, Ehsan Akhgari 
wrote:

> On 2015-11-28 2:06 AM, Gavin Sharp wrote:
>
>> The assumption that the validator must catch all malicious code for
>> add-on signing to be beneficial is incorrect, and seems to be what's
>> fueling most of this thread.
>>
>
> It would be really helpful if we could get past defending the add-on
> validator; the only thing that everyone in this thread seems to agree on is
> the list of things it is capable of doing.
>
> The problem is that how we're using it doesn't make sense given what it
> can and does do.
>
>> Validation being a prerequisite for
>> automatic signing is not primarily a security measure, but rather just a
>> way of eliminating "obvious" problems (security-related or otherwise)
>> from installed and enabled add-ons generally.
>>
>
> Successful validation is currently not merely a prerequisite for automatic
> signing of non-AMO add-ons, it is also a sufficient condition.  Let me
> repeat the part of my previous response which you didn't reply to:
>
> "The specific problem here is that we allow automatic signing of
> extensions once they pass the add-on validator checks, and we allow our
> users to run signed extensions without any other checks.  Therefore, the
> current system is vulnerable to attacks such as what Dan's PoC extension
> has demonstrated."
>
> Perhaps that is not what was supposed to happen, but we're doing this for
> a fact, and it's definitely the wrong thing to do.
>
>> With add-on signing fully
>> implemented, if (when) malicious add-ons get automatically signed,
>> you'll have several more effective tools to deal with them, compared to
>> the status quo.
>>
>> Gavin
>>
>> On Nov 27, 2015, at 8:49 PM, Eric Rescorla > > wrote:
>>
>>
>>>
>>> On Fri, Nov 27, 2015 at 4:09 PM, Ehsan Akhgari
>>> > wrote:
>>>
>>> On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp
>>> > wrote:
>>>
>>>
>>> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham >> > wrote:
>>> > > But the thing is, members of our security group are now piling
>>> into the
>>> > > bug pointing out that trying to find malicious JS code by static
>>> code
>>> > > review is literally _impossible_ (and perhaps hinting that
>>> they'd have
>>> > > said so much earlier if someone had asked them).
>>> >
>>> > No, that's not right. There's an important distinction between
>>> > "finding malicious JS code" and "finding _all_ malicious JS code".
>>> The
>>> > latter is impossible, but the former isn't.
>>> >
>>>
>>> Note that malicious code here might look like this:
>>>
>>>   console.log("success");
>>>
>>> It's impossible to tell by looking at the code whether that line
>>> prints a
>>> success message on the console, or something entirely different,
>>> such as
>>> running calc.exe.
>>>
>>> A better framing for the problem is "finding some arbitrary
>>> instances of
>>> malicious JS code" vs "finding malicious JS code".  My point in
>>> the bug and
>>> in the discussions prior to that was that a static checker can
>>> only do the
>>> former, and as such, if the goal of the validator is finding
>>> malicious
>>> code, its effectiveness is bound to be a lint tool at best.
>>>
>>>
>>> Indeed.  And if the validator is publicly accessible, let alone has
>>> public
>>> source code, it's likely to be straightforward for authors of malicious
>>> code to evade the validator. All they need to do is run their code
>>> through the validator, see what errors it spits out, and modify the
>>> code until it no longer spits out errors.
>>>
>>> Again, this goes back to threat model. If we're trying to make it easier
>>> for authors to comply with our policies (and avoid writing problematic
>>> add-ons), then a validator seems reasonable. However, if we're trying
>>> to prevent authors of malicious add-ons from getting their add-ons
>>> through, that seems much more questionable, for the reasons listed above.
>>> However, once we accept that we can't stop authors who are 

Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Jonathan Kew

On 30/11/15 15:45, Gavin Sharp wrote:

and it's definitely the wrong thing to do.


Fundamentally the add-on signing system was designed with an important
trade-off in mind: security (ensuring no malicious add-ons are
installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
that building and distributing add-ons is as easy as it can be).

If your proposed alternative plan is "get rid of automatic signing", then
we know that it's going to significantly hamper Mozilla's ability to
maintain a healthy add-on ecosystem, and harm what were considered some
important add-on use cases. I don't think it strikes the right balance.

If your proposed alternative plan is something else, maybe it would help to
clarify it.



Perhaps if there were a mechanism whereby "trusted" add-on developers 
could have their add-ons -- or even just updates for 
previously-reviewed-and-signed add-ons -- automatically signed without 
having to jump through the validator/review hoops each time?


How would a developer acquire "trusted" status? By demonstrating a track 
record of producing add-ons that pass AMO review -- which may be a 
combination of automatic validation and/or human review.


And of course any add-on developer who is found to have abused their 
"trusted" status to sign and deploy malicious code would have that 
status revoked, in addition to the malicious add-on being blocked.


ISTM this would maintain most of the intended benefits of the signing 
system, while substantially smoothing the path for developers such as 
Dan who need to deliver frequent updates to their users.


Feasible?

JK



Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread David Rajchenbach-Teller
Could we perhaps organize a MozLando workshop to discuss add-ons security?



Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Henri Sivonen
Japanese *email* is often encoded as ISO-2022-JP, and Web browsers
also support ISO-2022-JP even though Shift_JIS and EUC-JP are the more
common Japanese legacy encodings on the *Web*. The two UTF-16 variants
and ISO-2022-JP are the only remaining encodings in the Web Platform
that encode non-Basic Latin characters to bytes that represent Basic
Latin characters in ASCII.
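
To see that property in action, here is a small sketch using the
TextDecoder API: every byte below is either ESC or printable ASCII, yet
the escape sequences move the decoder into and out of JIS X 0208.

  var bytes = new Uint8Array([
    0x1B, 0x24, 0x42,  // ESC $ B: switch to JIS X 0208
    0x24, 0x22,        // decodes to U+3042 (HIRAGANA LETTER A)
    0x1B, 0x28, 0x42,  // ESC ( B: switch back to ASCII
    0x41               // plain "A"
  ]);
  console.log(new TextDecoder("iso-2022-jp").decode(bytes)); // "あA"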

There exists an extension of ISO-2022-JP called ISO-2022-JP-2. The
ISO-2022-JP decoder (not encoder) in Gecko supports ISO-2022-JP-2
features, which include the use of characters from JIS X 0212, KS X
1001 (better known as the repertoire for EUC-KR), GB 2312, ISO-8859-1
and ISO-8859-7. The reason originally given for adding ISO-2022-JP-2
support to Gecko was: "I want to add a ISO-2022-JP-2 charset decoder
to Mozilla."[1]

Other browsers don't support this extension, so it clearly can't be a
requirement for the Web Platform, and the Encoding Standard doesn't
include the ISO-2022-JP-2 extension in its definition for the
ISO-2022-JP decoder. Bringing our ISO-2022-JP decoder to compliance[2]
would, therefore, involve removing ISO-2022-JP-2 support.

The only known realistic source of ISO-2022-JP-2 data is Apple's Mail
application under some circumstances, which may impact Thunderbird and
SeaMonkey.

Are there any objections to removing the ISO-2022-JP-2 functionality
from mozilla-central?

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=72468
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=715833
-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Thomas Zimmermann
Hi

On 30.11.2015 at 16:40, Gavin Sharp wrote:
> It looks to me like you're arguing about a separate point (AMO review
> requirements for add-on updates), when the subject at hand is the add-on
> signing system's reliance on the AMO validator as the only prerequisite for
> automatic signing.

OK. Or maybe I used the term 'update' a bit sloppily. My question is: is
it worth holding back an extension because of a pending review
(either by a tool or a human)? I guess updating existing add-ons is the
more common case, compared to signing new ones.

Your reply makes me think that the whole discussion implicitly seems to
assume that a manual review can fix any problems with the automated
tools, or is always better. I would not agree to this. Manual reviews
depend a lot on the reviewer and the reviewer's constitution during the
review. With tools, at least you know what you get.

Best regards
Thomas

>
> Gavin
>
> On Mon, Nov 30, 2015 at 10:30 AM, Thomas Zimmermann > wrote:
>> Hi
>>
>> On 27.11.2015 at 16:50, Gavin Sharp wrote:
>>> On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham 
>> wrote:
 But the thing is, members of our security group are now piling into the
 bug pointing out that trying to find malicious JS code by static code
 review is literally _impossible_ (and perhaps hinting that they'd have
 said so much earlier if someone had asked them).
>>> No, that's not right. There's an important distinction between
>>> "finding malicious JS code" and "finding _all_ malicious JS code". The
>>> latter is impossible, but the former isn't.
>>>
>>> Proving "the validator won't catch everything" isn't particularly
>>> relevant when it isn't intended to, in the overall add-on signing
>>> system design.
>> I think the fact that the validator (or manual review) cannot catch
>> everything is very relevant.
>>
>> Users cannot rely on the review process (automatic or manual), because
>> it can never catch all bugs (malicious or not). So users still have to
>> rely on an extension's developers to get their code into good shape;
>> just as it is currently the case. And I'd guess that malicious code will
>> get more sophisticated when the review procedures improve.
>>
>> Another point is that one never knows how close to 'good' an extension
>> or a review is, because this would require knowledge about the absolute
>> number of bugs in the extension. Getting this number requires a perfect
>> validator. So all bugs from a review might get fixed, but the overall
>> extension is still in the 'crap territory'. I'm a bit surprised that
>> this hasn't been mentioned here yet.
>>
>> Therefore I'm skeptical about the effective benefit for the users. The
>> mandatory review seems to create a promise of security that it cannot
>> fulfill. Reviews and validation are good things, but holding back an
>> update for a pending review doesn't seem helpful.
>>
>> Best regards
>> Thomas
>>
>>> Gavin


Contributor wishes to help with writing a web bot/crawler (was Fwd: [dev-mdc] Greetings!)

2015-11-30 Thread Chris Mills
Hi there,

Apologies for the multi-list posting, but I’m really not sure where best to 
send this. If you are interested in talking to Mudit, please trim down the 
reply-list before posting!

So, Mudit (cc’d) wishes to contribute to Mozilla by helping to write a web 
bot/crawler to help the project in some way. Does anyone have any good first 
bugs/projects he could help with?

Best regards,

Chris Mills
Senior tech writer || Mozilla
developer.mozilla.org || MDN
cmi...@mozilla.com || @chrisdavidmills

> Begin forwarded message:
> 
> From: Mudit Sharma 
> Date: 30 November 2015 at 12:41:23 GMT
> To: Chris Mills 
> Subject: Re: [dev-mdc] Greetings!
> 
> Hi Chris,
> I was interested in building a web-crawler/bot, not just specifically writing 
> robots.txt files for websites.
> Here are a few GitHub profiles which specify web-bots at Mozilla:
> https://github.com/rtucker-mozilla/mozilla-nagios-bot
> https://github.com/mozilla/testdaybot
> I have a thought that a crawler for Mozilla could be beneficial, since 
> indexing web pages, just as Google does, could help the browser in some way.
> I can collaborate with some Mozillian who knows web-bot building or has some 
> past experience in the field.
> What do you think?




Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Andrew Sutherland
On Mon, Nov 30, 2015, at 01:24 PM, Adam Roach wrote:
> Does this mean it might interact with webmail services as well? Or do 
> they tend to do server-side transcoding from the received encoding to 
> something like UTF8?

They do server-side decoding.  It would take a tremendous amount of
effort to try and expose the underlying character set directly to the
browser given that the MIME part also has transport-encoding occurring
(base64 or quoted-printable), may have higher level things like
format=flowed going on, and may need multipart/related cid-protocol
transforms going on.
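
As a rough sketch of that layering: the base64 string below is an
ISO-2022-JP body as it would appear in a Content-Transfer-Encoding:
base64 part, and the server has to peel off both layers before the
browser sees any text.

  // Layer 1: undo the transport encoding (base64).
  var body = "GyRCJDMkcyRLJEEkTxsoQg==";
  var bytes = Uint8Array.from(atob(body), function (c) {
    return c.charCodeAt(0);
  });
  // Layer 2: undo the charset encoding (ISO-2022-JP).
  console.log(new TextDecoder("iso-2022-jp").decode(bytes)); // "こんにちは"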

Andrew


Re: Better mach support for Firefox for Android

2015-11-30 Thread Kartikaya Gupta
Thanks for all your work in making this happen! I've used some of
these commands recently and they work much better than they used to
for Fennec :)

On Mon, Nov 30, 2015 at 12:32 PM, Geoffrey Brown  wrote:
> In recent months, many improvements have been made to mach commands to
> support running, testing, and debugging Firefox for Android:
>
>  - More test commands for Android. These mach test commands now support
> Firefox for Android explicitly:
>
>   mach mochitest
>   mach robocop
>   mach reftest
>   mach crashtest
>   mach jstestbrowser
>   mach xpcshell-test
>   mach cppunittest
>
>
>  - Emulator support. 'mach android-emulator' launches the Android emulator,
> using the same Android image used to run tests seen on treeherder; select
> an image type with the --version option.
>
>  - All of the test, run, and debug commands offer to start the Android
> emulator if no Android device is connected (when run from an Android
> context).
>
>   $ ./mach mochitest testing/mochitest/tests/Harness_sanity
>   No Android devices connected. Start an emulator? (Y/n)
>
>  - All test, run, and debug commands offer to install Firefox on the
> connected device or emulator if Firefox is not already installed.
>
>   $ ./mach mochitest testing/mochitest/tests/Harness_sanity
>   It looks like Firefox is not installed on this device.
>   Install Firefox? (Y/n)
>
>  - Test commands requiring host xpcshell offer to install "host utilities"
> if none have been configured.
>
>  - Firefox can be run on an Android device or emulator with 'mach run'.
>
>  - JimDB, a GDB fork explicitly supporting debugging for Firefox for
> Android, can be installed, configured, and run with 'mach run --debug'.
>
>  - Emulator images, host utilities, and JimDB are automatically downloaded,
> cached, and installed as needed.
>
>  - Firefox for Android wiki pages have been updated:
> - Build info at https://wiki.mozilla.org/Mobile/Fennec/Android
> - Testing info at https://wiki.mozilla.org/Mobile/Fennec/Android/Testing
> - Debugging with GDB at
> https://wiki.mozilla.org/Mobile/Fennec/Android/GDB.
>
>  - Screencasts demonstrate common tasks at
> https://people.mozilla.org/~gbrown/android-demos/.
>
> Running, testing, and debugging Firefox will always be more complicated on
> Android than on desktop, but these tasks are now almost as easy on Android,
> and can be performed with the same mach commands as on desktop.
>
> If you have had trouble in the past running, testing, or debugging your own
> Firefox for Android build, this is a great time to try again. All you need
> to get started is a Firefox for Android build on a Linux or OSX computer.
> Something not working for you? Have more ideas for improvements? Let me
> know.


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Adam Roach

On 11/30/15 09:38, Henri Sivonen wrote:

The only known realistic source of ISO-2022-JP-2 data is Apple's Mail
application under some circumstances, which may impact Thunderbird and
SeaMonkey.


Does this mean it might interact with webmail services as well? Or do 
they tend to do server-side transcoding from the received encoding to 
something like UTF8?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: ESLint is now available in the entire tree

2015-11-30 Thread Fabrice Desré
The plan in general for chrome JS is to switch from #ifdefs to having
things defined in AppConstants.jsm
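
A minimal sketch of that pattern, where enableExperimentalFeature stands
in for whatever the #ifdef used to guard:

  // Before: preprocessed chrome JS, which is not standard JavaScript.
  // #ifdef NIGHTLY_BUILD
  //   enableExperimentalFeature();
  // #endif

  // After: a plain runtime check against build-time constants.
  Components.utils.import("resource://gre/modules/AppConstants.jsm");
  if (AppConstants.NIGHTLY_BUILD) {
    enableExperimentalFeature();
  }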

Fabrice

On 11/30/2015 01:05 AM, Tim Guan-tin Chien wrote:
> The Gecko JavaScript is also littered with #ifdef, and # is really not a
> comment token in JS... is there any plan to migrate away from that now that
> ESLint is present?
> 
> On Sun, Nov 29, 2015 at 10:37 PM, Vivien Nicolas  > wrote:
> 
> On Sun, Nov 29, 2015 at 2:30 PM, David Bruant  > wrote:
> 
> > Hi,
> >
> > Just a drive-by comment to inform folks that there is an effort to
> > transition the Mozilla JavaScript codebase to standard JavaScript.
> > The main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617
> >
> > And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about
> > removing non-standard features from SpiderMonkey.
> > Of course this can rarely be done right away and most often requires
> > dependent bugs to move code to standard ECMAScript (with a period of
> > warnings about the usage of the non-standard feature).
> >
> 
> What about .jsm modules ? Or is that not really considered ?
> 
> I have been told that ES6 modules may help to solve some of the problems
> covered by .jsm, but I don't see how you can create an ES6 module that
> can be accessed from multiple JS contexts from the same origin. Mostly
> interested as it would be nice to be able to write a module once and
> share
> it between multiple tabs instead of having to reload the same JS
> script for
> all similar tabs, like all the bugzilla tabs many of us have open for
> example.


-- 
Fabrice Desré
b2g team
Mozilla Corporation


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Bobby Holley
(Gingerly wading into this thread and hoping not to get sucked in)

Given the fundamental limits of static analysis, dynamic analysis might be
a better approach. I think we can do a reasonable job (with the help of
interpositions) of monitoring the various escape points at which add-on code
might do arbitrary dangerous things, without actually blocking those things
in a way that would break lots of add-ons. We could then
keep an eye on what addons are doing in the wild, and revoke the signatures
for the addon / developer if we find them to be misbehaving.

I proposed this in [1] and it got filed separately as [2]. Detailed
follow-up discussion is probably better to do in that bug.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1199628#c26
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1227464
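
For a flavor of what an interposition could look like, a bare sketch
using a Proxy; this is purely illustrative, the real hooks would live in
the add-on shim layer, and reportUsage is a placeholder for whatever
reporting channel we'd use:

  function monitor(target, label) {
    return new Proxy(target, {
      get: function (obj, prop) {
        // Observe, but do not block: record the access, then pass through.
        reportUsage(label + "." + String(prop));
        return obj[prop];
      }
    });
  }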


On Mon, Nov 30, 2015 at 8:25 AM, Gavin Sharp  wrote:

> That's one of the suggestions Dan Stillman makes in his post, and it
> seems like a fine idea to me.
>
> Gavin
>
> On Mon, Nov 30, 2015 at 11:15 AM, Jonathan Kew  wrote:
> > On 30/11/15 15:45, Gavin Sharp wrote:
> >>>
> >>> and it's definitely the wrong thing to do.
> >>
> >>
> >> Fundamentally the add-on signing system was designed with an important
> >> trade-off in mind: security (ensuring no malicious add-ons are
> >> installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
> >> that building and distributing add-ons is as easy as it can be).
> >>
> >> If your proposed alternative plan is "get rid of automatic signing",
> then
> >> we know that it's going to significantly hamper Mozilla's ability to
> >> maintain a healthy add-on ecosystem, and harm what were considered some
> >> important add-on use cases. I don't think it strikes the right balance.
> >>
> >> If your proposed alternative plan is something else, maybe it would help
> >> to
> >> clarify it.
> >>
> >
> > Perhaps if there were a mechanism whereby "trusted" add-on developers
> could
> > have their add-ons -- or even just updates for
> > previously-reviewed-and-signed add-ons -- automatically signed without
> > having to jump through the validator/review hoops each time?
> >
> > How would a developer acquire "trusted" status? By demonstrating a track
> > record of producing add-ons that pass AMO review -- which may be a
> > combination of automatic validation and/or human review.
> >
> > And of course any add-on developer who is found to have abused their
> > "trusted" status to sign and deploy malicious code would have that status
> > revoked, in addition to the malicious add-on being blocked.
> >
> > ISTM this would maintain most of the intended benefits of the signing
> > system, while substantially smoothing the path for developers such as Dan
> > who need to deliver frequent updates to their users.
> >
> > Feasible?
> >
> > JK
> >
> >


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Ehsan Akhgari

On 2015-11-30 10:29 AM, David Rajchenbach-Teller wrote:

Could we perhaps organize a MozLando workshop to discuss add-ons security?


I think you need to reach out to the add-ons team.  I was not involved 
in any of the design process; I just happened to notice the same issues 
as Dan did, after the fact.




Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread emiliano . heyns
On Monday, November 30, 2015 at 8:57:42 PM UTC+1, Dan Stillman wrote:
> On 11/30/15 6:24 AM, Gijs Kruitbosch wrote:
> > This seems like something we should be able to get data about. (I do 
> > not have such data.) Have you asked anyone?
> 
> If it's only Zotero that's affected by this, then we should have been 
> whitelisted three months ago when we first asked about it.

And the number is skewed in any case, as there will be extensions (such as 
mine) whose developers neuter functionality in order to avoid having to go 
through manual review. So I would be put in the "not affected" pile even 
though I *am* affected.


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Dan Stillman
Just to give some context here, we've been asking for a "trusted author" 
whitelist for three months. Gijs even helpfully proposed specific rules. 
The reason things came to this point is that it was still being argued 
as of last week that the whitelist was inherently more dangerous by 
allowing whitelisted developers to do malicious things and evade 
detection. We can now see that's not true — non-whitelisted extensions 
can do the same thing, trivially.


We've assumed that Zotero would be whitelisted, but that doesn't help 
other legitimate extension developers if the whitelist is designed with 
an assumption that a whitelisted extension is more dangerous. The 
proposed rules were still going to restrict whitelist status to 
extensions with large numbers of users.


Given what we know now, I can't see a justification for not 
"whitelisting" (meaning allowing an automated review override) any 
demonstrably legitimate developer. In terms of malware, you'd have to 
argue that someone who bought, compromised, or developed a legitimate 
extension would then be unable or unwilling to rewrite one line to get 
code past the validator. In terms of potentially insecure code (e.g., 
innerHTML), you'd have to argue that those patterns posed sufficient 
risk to users over the next several days that blocking an extension 
update (which possibly contained other important bug fixes that 
definitely affected users) made sense, as opposed to AMO reviewers 
looking at them after the fact and asking for immediate fixes or 
blocklisting depending on severity (and rescinding "whitelist" 
privileges if there's a pattern of ignoring issues).
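
(For anyone who doubts the "rewrite one line" claim: a scanner that looks
for textual patterns like eval(...) can be defeated by something as small
as the following sketch, where doSomething stands in for arbitrary code.)

  // A validator matching the literal pattern "eval(" sees neither line.
  var indirect = window["ev" + "al"];
  indirect("doSomething();");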


I've gone further and argued that, given the ease of a validator bypass, 
just doing an initial manual review for the first release of any 
front-loaded unlisted extension would be the meaningful blocking step, 
but that's not as important. I just don't want to see obviously 
legitimate developers of existing extensions blocked for no reason.


On 11/30/15 2:52 PM, Ehsan Akhgari wrote:

That sounds like a good idea to me as well.

On 2015-11-30 11:25 AM, Gavin Sharp wrote:

That's one of the suggestions Dan Stillman makes in his post, and it
seems like a fine idea to me.

Gavin

On Mon, Nov 30, 2015 at 11:15 AM, Jonathan Kew  
wrote:

On 30/11/15 15:45, Gavin Sharp wrote:

and it's definitely the wrong thing to do.


Fundamentally the add-on signing system was designed with an important
trade-off in mind: security (ensuring no malicious add-ons are
installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
that building and distributing add-ons is as easy as it can be).

If your proposed alternative plan is "get rid of automatic signing", then
we know that it's going to significantly hamper Mozilla's ability to
maintain a healthy add-on ecosystem, and harm what were considered some
important add-on use cases. I don't think it strikes the right balance.

If your proposed alternative plan is something else, maybe it would help
to clarify it.


Perhaps if there were a mechanism whereby "trusted" add-on developers could
have their add-ons -- or even just updates for
previously-reviewed-and-signed add-ons -- automatically signed without
having to jump through the validator/review hoops each time?

How would a developer acquire "trusted" status? By demonstrating a track
record of producing add-ons that pass AMO review -- which may be a
combination of automatic validation and/or human review.

And of course any add-on developer who is found to have abused their
"trusted" status to sign and deploy malicious code would have that status
revoked, in addition to the malicious add-on being blocked.

ISTM this would maintain most of the intended benefits of the signing
system, while substantially smoothing the path for developers such as Dan
who need to deliver frequent updates to their users.

Feasible?

JK




Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Joshua Cranmer 

On 11/30/2015 1:02 PM, Andrew Sutherland wrote:

On Mon, Nov 30, 2015, at 01:24 PM, Adam Roach wrote:

Does this mean it might interact with webmail services as well? Or do
they tend to do server-side transcoding from the received encoding to
something like UTF8?

They do server-side decoding.  It would take a tremendous amount of
effort to try and expose the underlying character set directly to the
browser given that the MIME part also has transport-encoding occurring
(base64 or quoted-printable), may have higher level things like
format=flowed going on, and may need multipart/related cid-protocol
transforms going on.


Additionally, declared mail charsets are sufficiently often a lie that 
it is much easier to control the decoding process by converting to UTF-8 
server-side, which also evades inconsistencies in browser decoding of 
charsets.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
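
A rough sketch of the server-side step Andrew and Joshua describe,
assuming a Node-style runtime with Buffer and a TextDecoder whose ICU
data includes legacy encodings such as ISO-2022-JP (decodeMimePart is a
made-up name, and format=flowed / cid: handling is left out):

  // Undo the Content-Transfer-Encoding, then convert the declared
  // charset to UTF-8. Unknown charset labels fall back to UTF-8, since
  // declared mail charsets are often a lie anyway.
  function decodeMimePart(body, transferEncoding, declaredCharset) {
    let bytes;
    if (transferEncoding === "base64") {
      bytes = Buffer.from(body, "base64");
    } else if (transferEncoding === "quoted-printable") {
      // Remove soft line breaks, then decode =XX escapes byte-for-byte.
      const unfolded = body.replace(/=\r?\n/g, "");
      bytes = Buffer.from(
        unfolded.replace(/=([0-9A-Fa-f]{2})/g,
                         (_, hex) => String.fromCharCode(parseInt(hex, 16))),
        "latin1");
    } else {
      bytes = Buffer.from(body, "latin1"); // 7bit/8bit: bytes as-is
    }
    let decoder;
    try {
      decoder = new TextDecoder(declaredCharset); // throws on unknown labels
    } catch (e) {
      decoder = new TextDecoder("utf-8");
    }
    return decoder.decode(bytes);
  }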

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Using the Taskcluster index to find builds

2015-11-30 Thread Chris AtLee
The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
work behind the scenes over the past few months to migrate the backend
storage for builds from the old "FTP" host to S3. While we've tried to make
this as seamless as possible, the new system is not a 100% drop-in
replacement for the old system, resulting in some confusion about where to
find certain types of builds.

At the same time, we've been working on publishing builds to the
Taskcluster Index [1]. This service provides a way to find a build given
various attributes, such as its revision or the date it was built.
Our plan is to make the index be the primary mechanism for discovering
build artifacts. As part of the ongoing buildbot to Taskcluster migration
project, builds happening on Taskcluster will no longer upload to
https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut off
platforms in buildbot, the index will be the only mechanism for discovering
new builds.

I posted to planet Mozilla last week [2] with some more examples and
details. Please explore the index, and ask questions about how to find what
you're looking for!

Cheers,
Chris

[1] http://docs.taskcluster.net/services/index/
[2] http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html
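
For illustration, a lookup against the index might look like the sketch
below; it assumes a JS runtime with fetch, and the namespace in the
final comment is only an example route (see [1] for the routes that
actually exist):

  const INDEX = "https://index.taskcluster.net/v1";
  const QUEUE = "https://queue.taskcluster.net/v1";

  // Resolve an index namespace to its most recently indexed task, then
  // list that task's artifacts (the build files themselves).
  async function findBuildArtifacts(namespace) {
    const indexed = await (await fetch(`${INDEX}/task/${namespace}`)).json();
    const reply =
      await (await fetch(`${QUEUE}/task/${indexed.taskId}/artifacts`)).json();
    return reply.artifacts.map(artifact => artifact.name);
  }

  // e.g.:
  // findBuildArtifacts("gecko.v2.mozilla-central.latest.firefox.linux64-opt")
  //   .then(names => console.log(names.join("\n")));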
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: APZ enabled on Fennec nightly

2015-11-30 Thread Robert O'Callahan
Fantastic!!!

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Web APIs documentation/evangelism meeting Thursday at 8 AM PST

2015-11-30 Thread Eric Shepherd
The Web API documentation community meeting, with representatives from
the technical evangelism and the API development teams, will take place
on Thursday at 8 AM Pacific Time (see http://bit.ly/1GghwBR for your
time zone).

Typical meetings include news about recent API development progress and
future development plans, discussions about what the priorities for
documenting and promoting new Web technologies should be, and the status
of ongoing work to document and evangelize these technologies.

We have an agenda, as well as details on how to join, here:

https://public.etherpad-mozilla.org/p/API-docs-meeting-2015-12-03.

If you have topics you wish to discuss, please feel free to add them to
the agenda. Also, if you're unable to attend but have information or
suggestions related to APIs on the Web, their documentation, and how we
promote these APIs, please add a note or item to the agenda so we can be
sure to address it.

We look forward to seeing you there!

-- 

Eric Shepherd
Senior Technical Writer
Mozilla 
Blog: http://www.bitstampede.com/
Twitter: http://twitter.com/sheppy
Check my Availability 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Thunderbird, the future, mozilla-central and comm-central

2015-11-30 Thread Mark Surman


Hi all

As a follow-on to Mitchell’s post, I want to outline more specifically 
how the Foundation got involved and the ways in which I believe the 
Foundation can assist in this situation.


Mitchell and I have had a number of discussions regarding Thunderbird. 
The Thunderbird Council has also come to each of us at various times. We 
agree it could be helpful for some of the Foundation's capabilities to 
be part of this work. Specifically, I’ve put forward an offer of 
Foundation staff time and resources to:


1. Advise and support the Council as they come up with a plan. Mitchell, 
myself and many at the Foundation care about the long term health of 
Thunderbird and feel some responsibility to help get it to a good spot.


2. Beyond time, we’ve offered the Council a modest amount of money to 
pay for contractors who can help develop options for both the 
organizational and technical future of Thunderbird.


2.1 As Mitchell said, this *does not* mean that MoFo is making technical 
decisions about Thunderbird -- just that we want to make sure the 
Council has access to a technical architect, a business planner, etc. to 
generate plans and options that the community can consider together.


2.2 As part of this, we’ve also (loosely) offered MoFo's meeting 
facilitation team run by Allen Gunn to bring together a set of 
Thunderbird stakeholders to discuss these options. I haven't fully 
discussed this part with the Council yet.


3. Finally, we've offered to accept donations for Thunderbird and 
disburse funds for contractors while we're figuring out this plan.


3.1 This makes MoFo, which already owns the Thunderbird IP, a 'fiscal 
home' for the Thunderbird community during this period. We also play 
this role for Firebug.


3.2 We’re talking to at least one org that is considering supporting 
Thunderbird. We are also looking at adding a user donation function to 
support the Thunderbird community. We will likely also supplement this 
funding with some of our own resources in a small way.


Some of the items above could be done via MoCo (items 2, 2.2) or MoFo, 
and since I have a bit of energy to focus on this now, Mitchell and I 
agreed we should take advantage of this energy. Other items make much 
more sense to be handled from the Foundation (item 3).


I'm not sure where all this leads -- but I am certain that we need to 
invest some time and resources in figuring out a good future for 
Thunderbird. That's what I've offered to help with.


If people have questions or want to somehow help out themselves, I'd be 
happy to discuss.


ms

On 2015-11-30 4:11 PM, Mitchell Baker wrote:
This is a long-ish message. It covers general topics about Thunderbird 
and the future, and also the topics of the Foundation involvement 
(point 9) and the question of merging repositories (point 11).   
Naturally, I believe it’s worth the time to read through the end.


1. Firefox and Thunderbird have lived with competing demands for some 
time now. Today Thunderbird developers spend much of their time 
responding to changes made in core Mozilla systems and technologies. 
At the same time, build, Firefox, and platform engineers continue to 
pay a tax to support Thunderbird.


2. These competing demands are not good for either project. Engineers 
working on Thunderbird must focus on keeping up and adapting Firefox’s 
web-driven changes. Engineers working on Firefox and related projects 
end up considering the competing demands of Thunderbird, and/or 
wondering if and how much they should assist Thunderbird. Neither 
project can focus wholeheartedly on what is best for it.


3. These competing demands will not get better soon. Instead, they are 
very likely to get worse. Firefox and related projects are now 
speeding up the rate of change, modernizing our development process 
and our infrastructure. Indeed, this is required for Mozilla to have 
significant impact in the current computing environment.


4. There is a belief among some that living with these competing 
demands is good for the Mozilla project as a whole, because it gives 
us an additional focus, assists Thunderbird as a dedicated open source 
community, and also supports an open source standards based email 
client. This sentiment is appealing, and I share it to some extent. 
There is also a sense that caring for fellow open source developers is 
good, which I also share.  However, point 2 above — “Neither project 
can focus wholeheartedly on what is best for it” -- is the most 
important point. Having Thunderbird has an additional product and 
focus is *not* good overall if it causes all of our products — 
Firefox, other web-driven products and Thunderbird — to fall short of 
what we can accomplish.


5.  Many inside of Mozilla, including an overwhelming majority of our 
leadership, feel the need to be laser-focused on activities like 
Firefox that can have an industry-wide impact.With all due respect 
to Thunderbird and the Thunderbird community, we have 

Re: Using the Taskcluster index to find builds

2015-11-30 Thread Ryan VanderMeulen

On 11/30/2015 3:43 PM, Chris AtLee wrote:

The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
work behind the scenes over the past few months to migrate the backend
storage for builds from the old "FTP" host to S3. While we've tried to make
this as seamless as possible, the new system is not a 100% drop-in
replacement for the old system, resulting in some confusion about where to
find certain types of builds.

At the same time, we've been working on publishing builds to the
Taskcluster Index [1]. This service provides a way to find a build given
various attributes, such as its revision or the date it was built.
Our plan is to make the index be the primary mechanism for discovering
build artifacts. As part of the ongoing buildbot to Taskcluster migration
project, builds happening on Taskcluster will no longer upload to
https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut off
platforms in buildbot, the index will be the only mechanism for discovering
new builds.

I posted to planet Mozilla last week [2] with some more examples and
details. Please explore the index, and ask questions about how to find what
you're looking for!

Cheers,
Chris

[1] http://docs.taskcluster.net/services/index/
[2] http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html

If I understand correctly, Taskcluster builds are only archived for one 
year, whereas we have nightly archives going back 10+ years now. What 
are our options for long-term archiving in this setup?

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using the Taskcluster index to find builds

2015-11-30 Thread Nicholas Alexander
On Mon, Nov 30, 2015 at 12:43 PM, Chris AtLee  wrote:

> The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
> work behind the scenes over the past few months to migrate the backend
> storage for builds from the old "FTP" host to S3. While we've tried to make
> this as seamless as possible, the new system is not a 100% drop-in
> replacement for the old system, resulting in some confusion about where to
> find certain types of builds.
>
> At the same time, we've been working on publishing builds to the
> Taskcluster Index [1]. This service provides a way to find a build given
> various attributes, such as its revision or the date it was built.
> Our plan is to make the index be the primary mechanism for discovering
> build artifacts. As part of the ongoing buildbot to Taskcluster migration
> project, builds happening on Taskcluster will no longer upload to
> https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut
> off
> platforms in buildbot, the index will be the only mechanism for discovering
> new builds.
>
> I posted to planet Mozilla last week [2] with some more examples and
> details. Please explore the index, and ask questions about how to find what
> you're looking for!
>

Also FWIW -- the Taskcluster Index has been backing |mach artifact| since
day one.  It's nice!

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why do we flush layout in nsDocumentViewer::LoadComplete?

2015-11-30 Thread Boris Zbarsky

On 11/30/15 4:09 PM, Robert O'Callahan wrote:

I think we should just grind through reftest/crashtests and add an explicit
flush to every onload/load event handler.


There may be mochitests affected too.  At least the ones using window 
screenshots, but possibly some other ones that aim to not trigger fatal 
assertions...


Maybe there are few enough that we can just look at the ones doing 
screenshotting and ignore the others.


-Boris
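
For concreteness, the explicit flush could be as small as this in each
test's load handler (an illustrative idiom, not a prescribed helper;
reading back layout data forces a synchronous reflow):

  window.addEventListener("load", function () {
    // Reading a layout property flushes pending layout explicitly,
    // instead of relying on the flush in nsDocumentViewer::LoadComplete.
    document.documentElement.getBoundingClientRect();
    // ... take the screenshot / finish the test here ...
  });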
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Gijs Kruitbosch
We have data on pre-signing add-ons that we consider malware, but we 
have no way of knowing (structurally, besides incidental reports on 
bugzilla with the malware uploaded) the contents of the XPIs in question 
and/or whether they would have passed the validator - they wouldn't go 
through the validator, because they would have been distributed outside 
of AMO (front- or sideloaded - either way we would not have source code).


So really, nobody has data on what will happen in a post-signing world. 
There's an interesting question about how much the pre-signing system 
can predict what will happen here, but it's sadly not as clear-cut as 
you hope.


~ Gijs

On 28/11/2015 19:30, Kartikaya Gupta wrote:

So it seems to me that people are actually in general agreement about
what the validator can and cannot do, but have different evaluations
of the cost-benefit tradeoff.

On the one hand we have the camp (let's say camp A) that believes the
validator provides negligible actual benefit, because it is trival to
bypass, but at the same time provides a huge cost to add-on
developers. And on the other hand we have the camp ("camp B") that
believes the validator provides some non-negligible benefit, even
though it may significantly increase the cost to add-on developers.


From what I have been told by multiple people, Mozilla does have
actual data on the type and number of malicious add-ons in the wild,
and it cannot be published. I don't really like this since it goes
against openness and whatnot, but I can accept that there are
legitimate reasons for not publishing this data. So the question is -
do the people in camp A or the people in camp B have access to this
data? I would argue that whoever has access to the data is in a better
position to make the right call with respect to the cost-benefit
tradeoff, and everybody else should defer to them. If people in both
camps have access to the data, then clearly they have different
interpretations of the data and they should discuss it further.
Presumably they know who they are.

kats


On Sat, Nov 28, 2015 at 10:35 AM, Eric Rescorla  wrote:

On Sat, Nov 28, 2015 at 2:06 AM, Gijs Kruitbosch 
wrote:


On 27/11/2015 23:46, dstill...@zotero.org wrote:


The issue here is that this new system -- specifically, an automated
scanner sending extensions to manual review -- has been defended by
Jorge's saying, from March when I first brought this up until
yesterday on the hardening bug [1], that he believes the scanner can
"block the majority of malware".



Funny how you omit part of the quote you've listed elsewhere, namely:
"block the majority of malware, but it will never be perfect".

You assert the majority of malware will be 'smarter' than the validator
expects (possibly after initial rejection) and bypass it. Jorge asserts,
from years of experience, that malware authors are lazy and the validator
has already been helpful, in conjunction with manual review.



Did Jorge in fact assert that as a matter of fact, or as a matter of
opinion? Maybe I missed it.

This seems like an empirical question: how many pieces of obvious malware
(in the sense that once the functionality is found it's clearly malicious
code as opposed to a mistake, not in the sense that it's easy to find the
functionality) have been found by the review process? How many pieces of
obvious malware (in the sense above) have passed the review process or
otherwise been found in the wild?

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Gijs Kruitbosch

On 28/11/2015 19:42, Dan Stillman wrote:

On 11/28/15 5:06 AM, Gijs Kruitbosch wrote:

On 27/11/2015 23:46, dstill...@zotero.org wrote:

The issue here is that this new system -- specifically, an automated
scanner sending extensions to manual review -- has been defended by
Jorge's saying, from March when I first brought this up until
yesterday on the hardening bug [1], that he believes the scanner can
"block the majority of malware".


Funny how you omit part of the quote you've listed elsewhere, namely:
"block the majority of malware, but it will never be perfect".

You assert the majority of malware will be 'smarter' than the
validator expects (possibly after initial rejection) and bypass it.
Jorge asserts, from years of experience, that malware authors are lazy
and the validator has already been helpful, in conjunction with manual
review. It's not helpful to say that what Jorge is saying is "not
true" - you mean different things when you say "the majority of malware".


I've addressed this repeatedly. In my view, saying "it will never be
perfect" is a misleading statement that betrays a misunderstanding of
the technical issues.


The validator has been used for years for AMO-published add-ons. It is 
in that context that it did what it did, and it did so reasonably well - 
there was always manual review for what the validator didn't catch. Not 
all add-ons would be published on AMO, and there was no reason for 
malware authors to publish on AMO except if they wanted to frontload it 
and have the slightly-less-bumpy install flow that AMO offers because of 
its default whitelist status in terms of sources of XPIs. Some people 
did try this. I have no comprehensive data or anything, but I do not 
believe that there was much if any such malware that made it to "fully 
reviewed" status on AMO.


In that context, saying that the validator blocked the majority of 
malware and would never be perfect was a perfectly valid statement and 
does not imply any lack of technical understanding.


The change that muddies the waters here is that we're now signing 
add-ons, and we're signing the ones that aren't distributed on AMO. We 
do not have source code for non-AMO distributed add-ons pre-signing, and 
so we have no way of knowing how much of that malware would or would not 
be picked up by the validator as-is. It seems likely that, as-is, a lot 
of it would be, because they have had no cause to use any techniques of 
circumvention. It seems equally likely that they would proceed to use 
such techniques in order to get signed anyway.


IOW, I think you're both right, and it would be helpful if you stopped 
attacking Jorge and other folks because I don't think that is a 
constructive discussion to be having.



If the person who's been defending this system for the last year


Jorge does not make decisions in a vacuum, wasn't the only person who 
architected the current solution, isn't some kind of lone dictator, 
doesn't get to decide what the Firefox team does (the team that actually 
implemented signing on the browser side), and he isn't the only person 
to have "defended the system" for whatever definition of system you're 
using here (it's not clear) -- and really, as I have pointed out before, 
even if he was all those things, it would not make "the system" any 
better or worse. Stop making this all about Jorge and his supposed lack 
of understanding. As I have repeatedly said, it is not helpful.




As for what, if anything, should block release without override, I'm
happy to talk specifics, but we can't have a discussion about that
without even agreeing on the point of the validator,


Why do we have to agree (and who are 'we'?) on the 'point' of the 
validator? As it is, the people in this discussion without access to the 
validator's results for zotero (which is almost everybody) have no idea 
what you're running into and which things are bothering you because they 
flag up false positives. For all we know you obfuscate all your code and 
use eval(btoa(...)) all over the place. More seriously, all I remember 
explicitly being mentioned is assigning to innerHTML in documents that 
are not (and won't be) in any docshell and therefore shouldn't be 
exploitable. It would be good to get a broader idea of the issues you're 
actually running into (as I'm assuming there are more than just this 
one, particularly because this one would be pretty easy to fix on your 
side, as you say the line in question only runs under other browsers).


In any case, if you want to insist, here's my view: the point of the 
validator is to raise the bar for both malware and for trivially-found 
security holes in otherwise benign (or seemingly benign / greyware, if 
you assume collusion between the 'benign' add-on submitted and the 
website that will use that add-on's security hole for privilege 
escalation of their website / remote code) add-ons. Raising the bar is 
helpful so that editors don't waste time reviewing script kiddie 

Re: Intent to ship/unprefix: Canvas2D imageSmoothingEnabled

2015-11-30 Thread Ms2ger

On 11/29/2015 04:34 PM, Florian Scholz wrote:
> Hi,
>
> we intend to ship ctx.imageSmoothingEnabled and add a deprecation
> warning for ctx.mozImageSmoothingEnabled.
>
> This is a Canvas2D context property which controls the
> interpolation of images drawn to 2D canvas.
>
> Bug https://bugzilla.mozilla.org/show_bug.cgi?id=768072
>
> Spec
> https://html.spec.whatwg.org/multipage/scripting.html#image-smoothing
>
> MDN
> https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/imageSmoothingEnabled
>
> Blink and WebKit unprefixed awhile ago, too:
> https://code.google.com/p/chromium/issues/detail?id=277199
> https://bugs.webkit.org/show_bug.cgi?id=147803

Do we have a test suite that shows our implementation is interoperable
with Blink's and WebKit's?

Thanks
Ms2ger
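
For reference, migrating to the unprefixed property is a one-line change
(sketch):

  const ctx = document.createElement("canvas").getContext("2d");
  // Unprefixed, per the WHATWG spec:
  ctx.imageSmoothingEnabled = false;  // e.g. nearest-neighbour for pixel art
  // Deprecated Gecko-only spelling this replaces:
  // ctx.mozImageSmoothingEnabled = false;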

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ESLint is now available in the entire tree

2015-11-30 Thread Gijs Kruitbosch

Yes. See bug 1150859 and friends.

~ Gijs

On 30/11/2015 09:05, Tim Guan-tin Chien wrote:

The Gecko JavaScript is also littered with #ifdef and # is really not a
token for comment in JS... is there any plan to migrate that away since
there is ESLint present?

On Sun, Nov 29, 2015 at 10:37 PM, Vivien Nicolas 
wrote:


On Sun, Nov 29, 2015 at 2:30 PM, David Bruant  wrote:


Hi,

Just a drive-by comment to inform folks that there is an effort to
transition Mozilla JavaScript codebase to standard JavaScript.
Main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617

And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about
removing non-standard features from SpiderMonkey.
Of course this can rarely be done right away and most often requires
dependent bugs to move code to standard ECMAScript (with a period of
warnings about the usage of the non-standard feature).



What about .jsm modules? Or is that not really considered?

I have been told that ES6 modules may help to solve some of the problems
covered by .jsm, but I don't see how you can create an ES6 module that can
be accessed from multiple JS contexts from the same origin. Mostly
interested as it would be nice to be able to write a module once and share
it between multiple tabs, instead of having to reload the same JS script for
all similar tabs, like all the bugzilla tabs many of us have open, for
example.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ESLint is now available in the entire tree

2015-11-30 Thread Tim Guan-tin Chien
Thanks; bug 1150859 only covers ./browser/.
Is it a done deal that we want to get rid of #ifdef in all JS code
everywhere?

(My particular interest would be obviously ./dom/ and ./b2g/)
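
For context, the pattern bug 1150859 and friends move toward appears to
be runtime checks against AppConstants.jsm instead of build-time
preprocessing, roughly like this (setupMacMenus is a made-up function
for illustration):

  // Made-up example function, for illustration:
  function setupMacMenus() {}

  // Chrome-privileged code: AppConstants.jsm exposes build-time flags
  // as ordinary runtime values.
  const { AppConstants } =
    Components.utils.import("resource://gre/modules/AppConstants.jsm", {});

  // Before (preprocessed, not valid standard JS):
  //   #ifdef XP_MACOSX
  //   setupMacMenus();
  //   #endif
  // After (parseable, and lintable by ESLint):
  if (AppConstants.platform == "macosx") {
    setupMacMenus();
  }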

On Mon, Nov 30, 2015 at 5:44 PM, Gijs Kruitbosch 
wrote:

> Yes. See bug 1150859 and friends.
>
> ~ Gijs
>
> On 30/11/2015 09:05, Tim Guan-tin Chien wrote:
>
>> The Gecko JavaScript is also littered with #ifdef and # is really not a
>> token for comment in JS... is there any plan to migrate that away since
>> there is ESLint present?
>>
>> On Sun, Nov 29, 2015 at 10:37 PM, Vivien Nicolas 
>> wrote:
>>
>> On Sun, Nov 29, 2015 at 2:30 PM, David Bruant  wrote:
>>>
>>> Hi,

 Just a drive-by comment to inform folks that there is an effort to
 transition Mozilla JavaScript codebase to standard JavaScript.
 Main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617

 And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about
 removing non-standard features from SpiderMonkey.
 Of course this can rarely be done right away and most often requires
 dependent bugs to move code to standard ECMAScript (with a period of
 warnings about the usage of the non-standard feature).


>>> What about .jsm modules ? Or is that not really considered ?
>>>
>>> I have been told that ES6 modules may help to solve some of the problems
>>> covered by .jsm, but I don't see how you can create an ES6 module that can
>>> be accessed from multiple JS contexts from the same origin. Mostly
>>> interested as it would be nice to be able to write a module once and share
>>> it between multiple tabs, instead of having to reload the same JS script for
>>> all similar tabs, like all the bugzilla tabs many of us have open, for
>>> example.
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Ehsan Akhgari

On 2015-11-28 2:06 AM, Gavin Sharp wrote:

The assumption that the validator must catch all malicious code for
add-on signing to be beneficial is incorrect, and seems to be what's
fueling most of this thread.


It would be really helpful if we can get past defending the add-on 
validator; the only thing that everyone in this thread seems to agree on 
is the list of things it is capable of doing.


The problem is that how we're using it doesn't make sense given 
what it can and does do.


> Validation being a prerequisite for automatic signing is not primarily
> a security measure, but rather just a way of eliminating "obvious"
> problems (security-related or otherwise) from installed and enabled
> add-ons generally.


Successful validation is currently not merely a prerequisite for 
automatic signing of non-AMO add-ons; it is also a sufficient condition. 
Let me repeat the part of my previous response which you didn't reply to:


"The specific problem here is that we allow automatic signing of 
extensions once they pass the add-on validator checks, and we allow our 
users to run signed extensions without any other checks.  Therefore, the 
current system is vulnerable to attacks such as what Dan's PoC extension 
has demonstrated."


Perhaps that is not what was supposed to happen, but it is in fact what 
we're doing, and it's definitely the wrong thing to do.


> With add-on signing fully implemented, if (when) malicious add-ons get
> automatically signed, you'll have several more effective tools to deal
> with them, compared to the status quo.

Gavin

On Nov 27, 2015, at 8:49 PM, Eric Rescorla wrote:




On Fri, Nov 27, 2015 at 4:09 PM, Ehsan Akhgari wrote:

On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp wrote:

> On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham wrote:
> > But the thing is, members of our security group are now piling into the
> > bug pointing out that trying to find malicious JS code by static code
> > review is literally _impossible_ (and perhaps hinting that they'd have
> > said so much earlier if someone had asked them).
>
> No, that's not right. There's an important distinction between
> "finding malicious JS code" and "finding _all_ malicious JS code". The
> latter is impossible, but the former isn't.
>

Note that malicious code here might look like this:

  console.log("success");

It's impossible to tell by looking at the code whether that line prints
a success message on the console, or something entirely different, such
as running calc.exe.

A better framing for the problem is "finding some arbitrary instances of
malicious JS code" vs "finding malicious JS code".  My point in the bug
and in the discussions prior to that was that a static checker can only
do the former, and as such, if the goal of the validator is finding
malicious code, its effectiveness is bound to be that of a lint tool at
best.


Indeed.  And if the validator is publicly accessible, let alone has public
source code, it's likely to be straightforward for authors of malicious
code to evade the validator. All they need to do is run their code
through the validator, see what errors it spits out, and modify the
code until it no longer spits out errors.

Again, this goes back to threat model. If we're trying to make it easier
for authors to comply with our policies (and avoid writing problematic
add-ons), then a validator seems reasonable. However, if we're trying
to prevent authors of malicious add-ons from getting their add-ons
through, that seems much more questionable, for the reasons listed above.
However, once we accept that we can't stop authors who are trying
to evade detection, then treating it as a linter and allowing authors
to override it seems a lot more sensible.

-Ekr
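
To make the earlier console.log example concrete: a line that is benign
in isolation can be repointed at anything before it runs, which is
exactly what a static scan of the call site cannot see (illustrative
only; doSomethingMalicious is a made-up stand-in):

  // Hypothetical stand-in for a real payload:
  const doSomethingMalicious = () => { /* anything at all */ };

  // Somewhere else in the add-on, possibly obfuscated or fetched at runtime:
  console.log = function () {
    doSomethingMalicious();
  };

  // The line the reviewer (or the validator) actually looks at:
  console.log("success"); // runs the payload, prints nothing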



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-11-30 Thread Ehsan Akhgari

On 2015-11-28 8:28 PM, Mike Hoye wrote:

On 2015-11-28 2:40 PM, Eric Rescorla wrote:

How odd that your e-mail was in response to mine, then.


Thanks, super helpful, really moved the discussion forward, high five.

To Ehsan's point that "malicious code here might look like this:
console.log("success"); [and] It's impossible to tell by looking at the
code whether that line prints a success message on the console, or
something entirely different, such as running calc.exe." - that's true,
but it also looks a lot like the sort of problem antivirus vendors have
been dealing with for a long time now. Turing completeness is a thing,
the halting problem exists and monsters are real, sure, but that doesn't
mean having antivirus software is a waste of time that solves no
problems and protects nobody.


As others have pointed out, your antivirus analogy is really irrelevant 
here.  Also, may I suggest that saying things such as "Turing 
completeness is a thing... and monsters are real" in a discussion 
about an actual security issue trivializes the discussion to a 
point where important issues get ignored, as I've seen happen a few 
times before in this thread?



One key claim Stillman made -- that "a system that takes five minutes to
circumvent does not 'raise the bar' in any real way" -- is perhaps true in
an academic sense, but not in a practical one. We know a lot more than
we did a decade ago about the nature of malicious online actors, and one
of the things we know for a fact is the great majority of malicious
actors on the 'net are - precisely as Jorge asserts - lazy, and that
minor speedbumps - sometimes as little as a couple of extra clicks - are
an effective barrier to people who are doing whatever it is they're
about to do because they're bored and it's easy. And that's most of them.


I agree with Jonas about this.  Even if all of the malware we have seen 
on AMO so far has been the work of script kiddies, the right way to 
think about this is "maybe we've not seen the more sophisticated ones." 
It would be terrible to base the security of our add-on ecosystem on 
assumptions about the laziness of malicious actors.


(Also, anecdotally, some of the exploit code against Firefox from web 
pages that I have seen myself is among the most sophisticated code and 
tricks I've seen in my career so far.)



Any semicompetent locksmith can walk through your locked front door
without breaking stride, but you lock it anyway because keeping out
badly-raised teenagers is not "security theater", it's sensible,
cost-effective risk management.


Please see my reply to Gavin on Friday.  To fit the status quo into this 
analogy: we're currently handing out copies of our front-door key to 
strangers who successfully fill out a questionnaire.


Cheers,
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform