How would an unsigned script be able to exploit functionality from a signed script if it's an either/or case - you have either all scripts signed or no extended features?

And think about this: a website can be totally safe today and deliver exploits tomorrow without the user even noticing. It has happened before and it will happen again. Signed content would prevent this by warning the user about missing or wrong signatures - even if signed scripts did not get a single extended function. I understand that signing code is not a cure for all evils. But it would add another layer that has to be broken if somebody gains access to a website and starts modifying its code.
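To illustrate, here is a rough sketch of such a check using the Web Crypto API. All names are invented, and key distribution and the signature format - the genuinely hard parts - are hand-waved:

// Sketch only: fetch a script and its detached signature, and refuse
// to run the script if the signature is missing or wrong. Assumes the
// browser already holds the publisher's public key somehow, and that
// a ".sig" companion file is a convention (it isn't - invented here).
async function loadVerifiedScript(url: string, publisherKey: CryptoKey): Promise<string> {
  const [body, sig] = await Promise.all([
    fetch(url).then(r => r.arrayBuffer()),
    fetch(url + '.sig').then(r => r.arrayBuffer()),
  ]);
  const ok = await crypto.subtle.verify(
    { name: 'RSASSA-PKCS1-v1_5' }, publisherKey, sig, body,
  );
  if (!ok) {
    // This is the point: a silent change to the site becomes a
    // visible event, even if signing grants no extra features.
    throw new Error('Signature check failed for ' + url);
  }
  return new TextDecoder().decode(body);
}

The crypto details don't matter here; what matters is that a missing or wrong signature becomes something the user gets warned about instead of a silent change.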

Michaela

On 11/19/2014 11:14 AM, Marc Fawzi wrote:
<<

    So there is no way for an unsigned script to exploit security
    holes in a signed script?

Of course there's a way. But by the same token, there's a way a signed script can exploit security holes in another signed script. Signing itself doesn't establish any trust, or security.
>>

Yup, that's also what I meant. Signing does not imply security, but to the average non-technical user a "signed app from a trusted party" may convey both trust and security, so they wouldn't think twice about installing such a script even if it asked for powerful permissions that another script could exploit.

<<

    Funny you mention crypto currencies as an idea to get inspiration
    from..."Trust but verify" is detached from that... a browser can
    monitor what the signed scripts are doing and if it detects a
    potentially malicious pattern it can halt the execution of the
    script and let the user decide if they want to continue...

That's not working for a variety of reasons. The first reason is that identifying what a piece of software does intelligently is one of those really hard problems. As in Strong-AI hard.
>>

Well, the user can set up rules for what counts as a malicious action, and there could be ready-made configurations (best practices codified in config) shipped as the browser defaults. The user could then exempt certain scripts.
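As a rough sketch of what I mean - all names here are invented, nothing like this is standardized:

// Sketch: a "best practices" policy shipped as the browser default,
// with per-script exemptions the user has granted explicitly.
interface ScriptPolicy {
  allowNetworkTo: string[];      // origins the script may contact
  allowClipboardRead: boolean;
  allowRawSockets: boolean;
}

const defaultPolicy: ScriptPolicy = {
  allowNetworkTo: ['self'],      // same-origin only by default
  allowClipboardRead: false,
  allowRawSockets: false,
};

// Exemptions the user granted explicitly, keyed by script URL.
const userExemptions = new Map<string, Partial<ScriptPolicy>>([
  ['https://example.com/editor.js', { allowClipboardRead: true }],
]);

function policyFor(scriptUrl: string): ScriptPolicy {
  return { ...defaultPolicy, ...userExemptions.get(scriptUrl) };
}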

I realize this is an open-ended problem and no solution is going to address it 100%. It's the nature of open systems to be open to attack, but how the system deals with an attack is what differentiates it. It's a wide-open area of research, I think, or should be.

But do we want a security model that's not extensible and not flexible? The answer is most likely NO.
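And to make the "monitor what the signed scripts are doing and halt" idea concrete, here is one way a single API could be gated by such a policy. Again just a sketch with invented names - real sandboxing in browsers happens at a much lower level than a Proxy:

// Wrap fetch so every call is checked against the policy before it
// runs; the blocked branch is where a browser could pause the script
// and ask the user instead of just throwing.
type NetPolicy = { allowNetworkTo: string[] };  // subset of ScriptPolicy above

function gatedFetch(policy: NetPolicy): typeof fetch {
  return new Proxy(fetch, {
    apply(target, thisArg, args: Parameters<typeof fetch>) {
      const raw = args[0] instanceof Request ? args[0].url : String(args[0]);
      const url = new URL(raw, location.href);
      const allowed = policy.allowNetworkTo.includes(url.origin)
        || (policy.allowNetworkTo.includes('self') && url.origin === location.origin);
      if (!allowed) {
        throw new Error('Blocked by policy: ' + url.origin);
      }
      return Reflect.apply(target, thisArg, args);
    },
  });
}

A real version would have to cover every privileged API, which is exactly the sandboxing Florian mentions below.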

On Tue, Nov 18, 2014 at 11:03 PM, Florian Bösch <pya...@gmail.com> wrote:

    On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi <marc.fa...@gmail.com> wrote:

        So there is no way for an unsigned script to exploit security
        holes in a signed script?

    Of course there's a way. But by the same token, there's a way a
    signed script can exploit security holes in another signed script.
    Signing itself doesn't establish any trust, or security.

        Funny you mention crypto currencies as an idea to get
        inspiration from..."Trust but verify" is detached from that...
        a browser can monitor what the signed scripts are doing and if
        it detects a potentially malicious pattern it can halt the
        execution of the script and let the user decide if they want
        to continue...

    That's not working for a variety of reasons. The first reason is
    that identifying what a piece of software does intelligently is
    one of those really hard problems. As in Strong-AI hard. Failing
    that, you can monitor what APIs a piece of software makes use of,
    and restrict access to those. However, that's already satisfied
    without signing by sandboxing. Furthermore, it doesn't entirely
    solve the problem, as any Android user will know. You get a
    ginormous list of permissions a given piece of software would
    like to use, and the user just clicks "yes". Alternatively, you get
    malware that's not trustworthy, that nobody managed to properly
    review, because the untrustworthy part was buried/hidden by the
    author somewhere deep down, to activate only long after trust
    extension by fiat has happened.

    But even if you assume that this somehow would be an acceptable
    model, what do you define as "malicious"? Reformatting your
    machine would be malicious, but so would posting on your
    Facebook wall. What constitutes a malicious pattern is actually
    more of a social than a technical problem.


