From what our security folks tell me, even taint analyses that sometimes
drop labels due to simplifications like failing to taint the PC are still
valuable. The code under analysis isn't hostile, at least in the use cases
I'm familiar with, so it's not going out of its way to launder tainted
data. Also, any dynamic analysis is vulnerable to limited test coverage:
we're sampling executions for vulnerabilities, which can prove their
presence but not their absence. An under-approximation of ideal taint
tracking is good enough, since the sampling methodology will cause us to
miss vulnerabilities anyway.
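For concreteness, here's the classic laundering gadget that only pc
tainting would catch -- a hypothetical sketch (the function and alphabet
are invented for illustration), and exactly the sort of thing non-hostile
code doesn't go out of its way to write:

    // Hypothetical sketch: copy `tainted` character by character so that
    // no string operation ever touches tainted data -- the value flows
    // purely through control flow, and a pc-blind analysis loses the label.
    function launder(tainted) {
      const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
      let clean = "";
      for (const ch of tainted) {
        for (const candidate of alphabet) {
          if (ch === candidate) {    // branch on tainted data
            clean += candidate;      // append an untainted literal
          }
        }
      }
      // Equal to `tainted` (for characters in `alphabet`), yet carries
      // no taint under a string-level propagator.
      return clean;
    }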
"Perfect mumble good yadda yadda."
(That's not to say I wouldn't welcome links to better techniques of
comparable simplicity...)
-----Original message-----
From: Brendan Eich <bren...@mozilla.com>
To: Nicholas Nethercote <n.netherc...@gmail.com>
Cc: Jim Blandy <jbla...@mozilla.com>, Ivan Alagenchev
<ialagenc...@mozilla.com>, dev-tech-js-engine-internals@lists.mozilla.org,
Mark Goodwin <mgood...@mozilla.com>, Dave Herman <dher...@mozilla.com>
Sent: Sat, 10 Aug 2013 00:59:55 GMT+00:00
Subject: Re: [JS-internals] Taint analysis in SpiderMonkey
Perl's tainting was unsound (it didn't taint the pc, so it was vulnerable
to implicit flows [Denning, 1976 IIRC]).
Tainting the pc leads to label creep without some static or hybrid
analysis to help untaint at control-flow joins. My experimental
data-tainting security implementation in Netscape 3 did not untaint, and
suffered fatal label creep -- this negative result cemented the
same-origin policy in browsers, which I'd been patching like a monkey
since the beginning, and which only gained a sound implementation with
wrappers much later (jst, mrbkap and me in 2004-5; other browsers even
later). OCap FTW!
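To illustrate the creep: a hypothetical trace with invented labels (T =
tainted, U = untainted) and an invented input function, not any real
monitor's API:

    function readTaintedInput() { return "yes"; }  // stand-in for a tainted source

    let secret = readTaintedInput();  // secret: T
    let x = "a";                      // x: U
    let y = "b";                      // y: U
    if (secret === "yes") {           // pc: U -> T inside the branch
      x = "c";                        // x: T (implicit flow captured)
    }
    // Join point: a purely dynamic monitor can't tell when the influence
    // of `secret` ends, so it keeps pc: T from here on, and every later
    // write -- even this one -- gets labeled T. That's label creep. A
    // static or hybrid analysis sees that only x is written under the
    // branch, restores pc: U at the join, and keeps y untainted.
    y = "d";                          // y: U with join untainting, T without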
John Mitchell's browser-grad-student-research kids, now all growed up at
Google (Adam Barth) and CMU West / Apportable (Collin Jackson), are down
on information-flow security, due to label creep or the complex analyses
required to avoid it. I tend to agree but have a hunch it will rise again.
It turns out dherman is working with a grad student doing a taint model;
cc'ing him.
/be
Nicholas Nethercote wrote:
On Fri, Aug 9, 2013 at 2:59 PM, Jim Blandy <jbla...@mozilla.com> wrote:
The taint analysis applies to strings only, and has four parts (a rough
sketch in code follows the list):
* It identifies certain *sources* of strings as "tainted":
document.URL, input fields, and so on.
* The JavaScript engine propagates taint information on strings.
Taking a substring of a tainted string, or concatenating a tainted
string with another, yields a tainted string. Regexp operations,
charAt, and so on all propagate taint information.
* It identifies certain *sinks* as vulnerable: eval, 'src' attributes
on script elements, and so on.
* Finally, the tool's user interface logs the appearance of tainted
strings at vulnerable sinks. The taint metadata actually records the
provenance of each region of a tainted string, so the tool can
explain exactly why the final string is tainted, which is really
helpful in constructing XSS attacks.
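Here's a minimal sketch of that region-based provenance scheme (the
class and method names are mine; the real instrumentation would live
inside the engine's string representation rather than in a wrapper
class):

    // Hypothetical sketch: a tainted string records which slices came
    // from which source, so a report at a sink can say exactly why.
    class TaintedString {
      constructor(text, regions) {
        this.text = text;
        this.regions = regions;  // [{ begin, end, source }]
      }
      static fromSource(text, source) {
        return new TaintedString(text,
                                 [{ begin: 0, end: text.length, source }]);
      }
      substring(begin, end) {
        // Keep only regions overlapping the slice, rebased to the result.
        const regions = this.regions
          .filter(r => r.end > begin && r.begin < end)
          .map(r => ({ begin: Math.max(r.begin, begin) - begin,
                       end: Math.min(r.end, end) - begin,
                       source: r.source }));
        return new TaintedString(this.text.slice(begin, end), regions);
      }
      concat(other) {
        // Shift the second operand's regions past the first's text.
        const offset = this.text.length;
        const shifted = other.regions.map(r => ({ begin: r.begin + offset,
                                                  end: r.end + offset,
                                                  source: r.source }));
        return new TaintedString(this.text + other.text,
                                 this.regions.concat(shifted));
      }
    }

    // At a sink like eval, the tool can log each region and its source:
    const url = TaintedString.fromSource("?q=payload", "document.URL");
    const frag = url.substring(3, 10);  // "payload", still traced to document.URL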
Are there any operations that are considered to untaint tainted
strings? I seem to remember that in Perl's taint mode a regexp
search-and-replace operation untaints, but I could be wrong.
Nick
_______________________________________________
dev-tech-js-engine-internals mailing list
dev-tech-js-engine-internals@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals