On Wed, Jun 1, 2022 at 5:17 PM 'Łukasz Anforowicz' via v8-dev <
[email protected]> wrote:

> Benefit of full JS parse over a list of known non-JS prefixes: Stricter
> is-it-JS checking = more non-JS things get blocked = improved security.
> Still, there is a balance here - some heuristics (like the ones proposed by
> Daniel) are almost as secure as full JS parse (while being easier to
> implement and having less of a performance impact).
>

Makes sense; I'm just asking to make sure that we strike the right balance
between security improvements and complexity/performance cost. Even a JS
tokenizer without a full parser is a significant complexity investment (it
needs, e.g., a full regexp parser), and the language grammar is broad enough
that I expect exhaustively enumerating all possible combinations of even
just 3-5 tokens to be prohibitively large (setting aside maintainability in
the face of ever-evolving standards).
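For concreteness, the kind of prefix-blocklist heuristic being discussed could look something like the sketch below. The prefix list and function name are hypothetical, purely for illustration; the actual prefixes Daniel proposed may differ.

```cpp
#include <algorithm>
#include <array>
#include <cctype>
#include <string_view>

// Hypothetical sketch of a prefix-blocklist heuristic: classify a response
// body as "definitely not JS" if, after skipping leading ASCII whitespace,
// it starts (case-insensitively) with a known non-JS prefix. The list here
// is illustrative only, not the set proposed in this thread.
bool StartsWithNonJsPrefix(std::string_view body) {
  static constexpr std::array<std::string_view, 4> kNonJsPrefixes = {
      "<!doctype",  // HTML
      "<html",      // HTML
      "<?xml",      // XML
      ")]}'",       // common JSON anti-hijacking guard
  };
  // Skip leading ASCII whitespace.
  size_t i = 0;
  while (i < body.size() && std::isspace(static_cast<unsigned char>(body[i])))
    ++i;
  body.remove_prefix(i);
  for (std::string_view prefix : kNonJsPrefixes) {
    if (body.size() >= prefix.size() &&
        std::equal(prefix.begin(), prefix.end(), body.begin(),
                   [](char a, char b) {
                     return std::tolower(static_cast<unsigned char>(a)) ==
                            std::tolower(static_cast<unsigned char>(b));
                   })) {
      return true;
    }
  }
  return false;
}
```

Note that such a check is cheap and stateless, but deliberately incomplete: a body starting with `{`, for example, is ambiguous (JSON object vs. JS block statement), which is exactly where the coverage questions below come in.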

Do we have a measure of how much non-JS coverage the current heuristics
give, on real-world examples of JSON files? Or perhaps, a measure of how
many different prefixes there are that we could blocklist? Do we know at
what point the improved security has diminishing returns?

- Leszek

-- 
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev
--- 
You received this message because you are subscribed to the Google Groups 
"v8-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/v8-dev/CAGRskv9UUNJ9sjW0FvuHyCN90j%3DfbafSOgGVBG19qRe19_%2BO5w%40mail.gmail.com.
