On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi <marc.fa...@gmail.com> wrote:
> So there is no way for an unsigned script to exploit security holes in a
> signed script?
Of course there's a way. But by the same token, a signed script can exploit
security holes in another signed script. Signing by itself doesn't establish
any trust or security.

> Funny you mention crypto currencies as an idea to get inspiration
> from..."Trust but verify" is detached from that... a browser can monitor
> what the signed scripts are doing and if it detects a potentially malicious
> pattern it can halt the execution of the script and let the user decide if
> they want to continue...
That doesn't work, for a variety of reasons. The first is that intelligently
identifying what a piece of software does is one of those really hard
problems. As in Strong-AI hard. Failing that, you can monitor which APIs a
piece of software makes use of and restrict access to those. However,
sandboxing already achieves that without any signing. Furthermore, it doesn't
entirely solve the problem, as any Android user will know. Either you get a
ginormous list of permissions a given piece of software would like to use,
and the user just clicks "yes". Or you get malware that nobody managed to
properly review, because the untrustworthy part was buried/hidden by the
author somewhere deep down, to activate only long after trust extension by
fiat has happened.
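The "restrict access to APIs" point above can be sketched in a few lines.
This is a minimal illustration (all names here are hypothetical, not any real
browser API): capabilities a script may use are handed to it through a
permission-gated proxy, so everything outside the grant list simply throws.
The gate works regardless of whether the script is signed, which is the
point about sandboxing.

```javascript
// Hypothetical sketch of capability-based sandboxing: a script only
// reaches the APIs it was explicitly granted; everything else throws.
function makeSandbox(capabilities, granted) {
  return new Proxy(capabilities, {
    get(target, name) {
      if (!granted.has(name)) {
        // Deny by default -- no signature or trust decision involved.
        throw new Error(`Permission denied: ${String(name)}`);
      }
      return target[name];
    },
  });
}

// Hypothetical capabilities the host could offer.
const capabilities = {
  log: (msg) => `logged: ${msg}`,
  formatDisk: () => "disk formatted", // deliberately never granted below
};

// The script is only granted "log".
const sandbox = makeSandbox(capabilities, new Set(["log"]));

console.log(sandbox.log("hello")); // allowed
try {
  sandbox.formatDisk(); // not granted, so this throws
} catch (e) {
  console.log(e.message); // "Permission denied: formatDisk"
}
```

The deny-by-default gate is the key design choice: the user (or host)
decides up front what the script can touch, instead of trying to recognize
"malicious" behavior after the fact.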

But even if you assume this would somehow be an acceptable model, what do
you define as "malicious"? Reformatting your machine would be malicious, but
so could posting on your Facebook wall. What constitutes a malicious pattern
is more of a social problem than a technical one.
