I wasn't suggesting that you had made that incorrect assumption.

Gavin

On Sat, Nov 28, 2015 at 10:31 AM, Eric Rescorla <[email protected]> wrote:

> On Fri, Nov 27, 2015 at 11:06 PM, Gavin Sharp <[email protected]>
> wrote:
>
>> The assumption that the validator must catch all malicious code for
>> add-on signing to be beneficial is incorrect, and seems to be what's
>> fueling most of this thread.
>>
>
> I'm not sure how you got that out of my comments, since I explicitly said
> the
> opposite: "If we're trying to make it easier for authors to comply with our
> policies (and avoid writing problematic add-ons), then a validator seems
> reasonable"
>
>
> Validation being a prerequisite for automatic signing is not primarily a
>> security measure, but rather just a way of eliminating "obvious" problems
>> (security-related or otherwise) from installed and enabled add-ons
>> generally.
>>
>
> Sure, but the argument for it being a *hard* requirement is primarily a
> security
> one, and that's the one that falls afoul of the threat model point I made
> below.
>
>
>
>> With add-on signing fully implemented, if (when) malicious add-ons get
>> automatically signed, you'll have several more effective tools to deal with
>> them, compared to the status quo.
>>
>
> Yes.
>
> -Ekr
>
>> Gavin
>>
>> On Nov 27, 2015, at 8:49 PM, Eric Rescorla <[email protected]> wrote:
>>
>>
>>
>> On Fri, Nov 27, 2015 at 4:09 PM, Ehsan Akhgari <[email protected]>
>> wrote:
>>
>>> On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp <[email protected]>
>>> wrote:
>>>
>>> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <[email protected]>
>>> wrote:
>>> > > But the thing is, members of our security group are now piling into
>>> the
>>> > > bug pointing out that trying to find malicious JS code by static code
>>> > > review is literally _impossible_ (and perhaps hinting that they'd
>>> have
>>> > > said so much earlier if someone had asked them).
>>> >
>>> > No, that's not right. There's an important distinction between
>>> > "finding malicious JS code" and "finding _all_ malicious JS code". The
>>> > latter is impossible, but the former isn't.
>>> >
>>>
>>> Note that malicious code here might look like this:
>>>
>>>   console.log("success");
>>>
>>> It's impossible to tell by looking at the code whether that line prints a
>>> success message on the console, or something entirely different, such as
>>> running calc.exe.
>>>
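[A minimal sketch, not part of the original message, to make the point above concrete: the final line is exactly the "harmless" call quoted above, yet a static checker looking at that call site cannot know that console.log was rebound earlier. The `hijacked` marker is an illustrative stand-in for a real payload.]

```javascript
// Rebind console.log before the "innocent" call site. A static checker
// that only inspects the call at the bottom sees nothing suspicious.
const realLog = console.log;
console.log = function (msg) {
  if (msg === "success") {
    // A real attack would do something harmful here (e.g. spawn a
    // process); this sketch just sets a marker so it stays inert.
    globalThis.hijacked = true;
  }
  realLog.call(console, msg);
};

console.log("success"); // looks benign, but also runs the injected branch
```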
>>> A better framing for the problem is "finding some arbitrary instances of
>>> malicious JS code" vs "finding malicious JS code".  My point in the bug
>>> and
>>> in the discussions prior to that was that a static checker can only do
>>> the
>>> former, and as such, if the goal of the validator is finding malicious
>>> code, its effectiveness is at best that of a lint tool.
>>
>>
>> Indeed.  And if the validator is publicly accessible, let alone has public
>> source code, it's likely to be straightforward for authors of malicious
>> code to evade the validator. All they need to do is run their code
>> through the validator, see what errors it spits out, and modify the
>> code until it no longer spits out errors.
>>
>> Again, this goes back to threat model. If we're trying to make it easier
>> for authors to comply with our policies (and avoid writing problematic
>> add-ons), then a validator seems reasonable. However, if we're trying
>> to prevent authors of malicious add-ons from getting their add-ons
>> through, that seems much more questionable, for the reasons listed above.
>> And once we accept that we can't stop authors who are trying
>> to evade detection, treating it as a linter and allowing authors
>> to override it seems a lot more sensible.
>>
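[A sketch of the evasion loop described above, using a made-up check; `naiveValidator` is hypothetical and is not Mozilla's actual validator. It only illustrates why a publicly runnable static rule is easy to iterate against: the author keeps the same runtime behavior while rewriting until the rule no longer fires.]

```javascript
// Hypothetical static rule: reject source containing a literal eval(...)
// call. An author who can run this locally simply rewrites until it passes.
function naiveValidator(source) {
  const errors = [];
  if (/\beval\s*\(/.test(source)) {
    errors.push("use of eval() is not allowed");
  }
  return errors;
}

const flagged = 'eval("2 + 2")';
const evasive = 'globalThis["ev" + "al"]("2 + 2")'; // same effect at runtime

console.log(naiveValidator(flagged).length); // 1: rule fires
console.log(naiveValidator(evasive).length); // 0: rule evaded
```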
>> -Ekr
>>
>>
>
_______________________________________________
dev-platform mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-platform
