Hi,

I am really happy to hear that. As you know, embedded systems are our
focus, and these changes would make JSC more robust on them. We worked on
handling out-of-memory situations before, but at the time our work was not
accepted, since it introduced extra complexity.

From time to time I monitor the memory consumption of WebKit components,
and in JSC the generated code and the collector's heap often consume the
most memory (these could be a starting point for handling OOM situations).

One more question: is your OOM JS exception defined by the ECMA standard,
or is it a custom one? If the latter, is it compatible with other
browsers?

Cheers,
Zoltan

> On Aug 3, 2010, at 12:34 PM, Zoltan Herczeg wrote:
>
>> Hi,
>>
>> I saw a lot of patches changing the memory allocation behaviour of
>> JavaScriptCore, and would like to start a discussion about the long-term
>> purpose of these changes. If I understand correctly, the aim is to limit
>> the memory consumption of JavaScriptCore to a certain level and keep the
>> browser alive when bad code tries to allocate a huge memory area that
>> would otherwise force the browser to crash (is that right?).
>
> Hey Zoltan,
>
> Currently I don't think we have any goals to constrain JavaScriptCore's
> memory usage in quite the way you suggest – though I don't think it's a
> bad idea at all.
>
> We really have two things going on re memory at the minute.  One is to
> move all JIT code into a single VM allocation (but only on the platforms
> where it makes sense to do so), and another is to investigate copying
> collection.
>
> We currently use a single VM allocation for JIT code buffers on x86-64,
> and by doing so we can ensure that all JIT code falls within a 2GB region,
> and as such all branches within JIT code can be linked with ±31-bit
> relative offsets.  There are further advantages to the approach of
> performing a single VM allocation up front.  This may be faster on some
> platforms than frequent calls to mmap/munmap for individual pools, and
> some platforms may limit the number of such allocations that can be
> performed (e.g. on Symbian, older ARM devices can only create a limited
> number of RChunks).  Whilst I don't expect all platforms to switch to this form
> of memory allocation, I do expect it to be used further in the future
> (particularly on platforms like ARM, where the limited branch ranges may
> mean that it is useful to try to keep JIT code allocations close
> together).  The fixed pool allocator does come at a cost: it introduces
> a new resource cap that web pages could hit through heavy use, hence the
> current work in progress to at least make failure in these cases more
> graceful than a crash.
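
The fixed-pool idea might be sketched roughly like this (all names here are
hypothetical, not JSC's actual allocator; a real implementation would mmap
the reservation up front with execute permission):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Sketch: reserve one large region and bump-allocate JIT pools from it,
// so any two code addresses stay within a signed 32-bit relative offset.
class FixedExecutablePool {
public:
    explicit FixedExecutablePool(size_t reservationSize)
        : m_size(reservationSize), m_used(0)
    {
        // Real code would mmap with PROT_EXEC; malloc stands in here.
        m_base = static_cast<uint8_t*>(std::malloc(reservationSize));
    }
    ~FixedExecutablePool() { std::free(m_base); }

    // Returns nullptr on exhaustion -- the new resource cap that callers
    // must handle gracefully instead of crashing.
    void* alloc(size_t bytes)
    {
        if (m_used + bytes > m_size)
            return nullptr;
        void* p = m_base + m_used;
        m_used += bytes;
        return p;
    }

    // Any branch between two addresses inside a <=2GB pool fits in a
    // 31-bit relative displacement.
    static bool branchReachable(const void* from, const void* to)
    {
        intptr_t delta = reinterpret_cast<intptr_t>(to)
                       - reinterpret_cast<intptr_t>(from);
        return delta >= INT32_MIN && delta <= INT32_MAX;
    }

private:
    uint8_t* m_base;
    size_t m_size;
    size_t m_used;
};
```

The important property is the last method: with a single reservation, the
reachability check is true by construction for all intra-pool branches.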
>
>> In practice we could build a sandbox around JavaScriptCore (maybe one
>> for each page) and keep every allocation there, couldn't we? This
>> probably mmap-ed region could be used by the ExecutableAllocator, the
>> GarbageCollector, and by other regular allocations.
>
> We currently try to prevent runaway JavaScript code from crashing JSC by
> allocating too much memory.  There is a question here of what I mean by
> 'runaway', and what qualifies as 'too much'. :-)
>
> Our current definition of 'too much' is that we allow you to allocate up
> to the point that a malloc actually fails, at which point, in cases where
> a memory allocation is very directly triggered by JS code execution (e.g.
> a string or array growing too large) we will throw a JS exception, rather
> than allowing a CRASH() to take the whole browser down.  This certainly
> seems an improvement over just crashing on pages with huge datasets, but
> there seem to be two glaring weaknesses.  It certainly seems questionable
> whether we should really allow you to get all the way to the point of
> malloc failing before we think you've used enough memory – most people
> would probably prefer their web pages could not exhaust system resources
> to that level.  Secondly, we only recover gracefully in cases where memory
> is allocated directly from JS actions, which probably guards against
> crashes in a lot of cases where we're just dealing with a program with a
> large dataset – but in cases of malicious code an attacker could exhaust
> most memory with large JS objects (long strings, arrays), then
> deliberately perform actions that require allocation of internal objects
> to try to trigger a CRASH.
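
The recovery policy described above could be sketched like this
(illustrative names only, not JSC's real code):

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <string>

// Sketch: allocations triggered directly by JS execution report failure
// as a catchable error instead of crashing the process.
struct JSError { std::string message; };

// Stand-in for an engine allocation that may legitimately fail.
inline char* tryAllocate(size_t bytes)
{
    return static_cast<char*>(::operator new(bytes, std::nothrow));
}

// Called on the "directly triggered by JS" path, e.g. growing a string.
inline char* growStringBuffer(size_t newCapacity)
{
    if (char* p = tryAllocate(newCapacity))
        return p;
    // Instead of CRASH(): surface the failure as a JS-level exception
    // that script can catch, keeping the browser alive.
    throw JSError{"out of memory"};
}
```

As the paragraph above notes, this only guards paths that allocate directly
on behalf of JS; internal allocations elsewhere can still hit a CRASH.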
>
> A better memory sandbox for JSC, that capped memory usage at a more
> reasonable level, and that protected against memory exhaustion from
> allocation of internal data structures too, would certainly sound like it
> has potential to be a good thing.  To make JSC fully robust from malicious
> code crashing the browser by exhausting memory would probably mean changes
> to WebCore too, since an attacker wanting to crash the browser could make
> DOM calls that allocate data structures until we ran out of memory in
> WebCore and hit a CRASH – but even without this, as a JSC only change,
> this may be an improvement.
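
A capped "memory sandbox" of the kind discussed might look something like
this minimal sketch (hypothetical names; a real design would need
thread-safety and WebCore integration):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Sketch: all engine allocations -- JS objects and internal structures
// alike -- are accounted against a per-instance budget set well below
// the point where the system itself runs out of memory.
class CappedAllocator {
public:
    explicit CappedAllocator(size_t capBytes) : m_cap(capBytes), m_used(0) {}

    void* alloc(size_t bytes)
    {
        if (m_used + bytes > m_cap)
            return nullptr; // budget exhausted: fail early and recoverably
        void* p = std::malloc(bytes);
        if (p)
            m_used += bytes;
        return p;
    }

    void free(void* p, size_t bytes)
    {
        std::free(p);
        m_used -= bytes;
    }

    size_t used() const { return m_used; }

private:
    size_t m_cap;
    size_t m_used;
};
```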
>
>> I also saw a patch about moving garbage collector, that is also an
>> interesting area.
>
> Yes, a very interesting area that Nathan has been doing some great
> initial work on.  The work on a copying collector is a larger, longer-term
> project, and is primarily motivated by performance.  A copying collector
> is a step towards a generational collector, and with it smaller GC pauses
> and, hopefully, less marking, leading to lower overall GC overhead.
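
To make the copying idea concrete, here is a toy semispace heap (nothing
like JSC's actual collector, and it only copies leaf roots rather than
doing a full Cheney scan): survivors are compacted into the other half,
which keeps allocation a cheap pointer bump.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

class SemispaceHeap {
public:
    explicit SemispaceHeap(size_t half)
        : m_half(half), m_space(2 * half), m_fromBase(0), m_top(0) {}

    // Bump allocation: the fast path a copying collector preserves.
    void* alloc(size_t bytes)
    {
        if (m_top + bytes > m_half)
            return nullptr; // would trigger a collection in a real heap
        void* p = &m_space[m_fromBase + m_top];
        m_top += bytes;
        return p;
    }

    // Copy each live root into the other semispace and flip.  Real
    // collectors also trace pointers between objects; this toy assumes
    // roots are leaf objects of known size.
    void collect(std::vector<std::pair<void**, size_t>>& roots)
    {
        size_t toBase = m_half - m_fromBase; // the other half
        size_t toTop = 0;
        for (auto& [slot, size] : roots) {
            void* newAddr = &m_space[toBase + toTop];
            std::memcpy(newAddr, *slot, size);
            *slot = newAddr;     // update the root in place
            toTop += size;
        }
        m_fromBase = toBase;     // flip semispaces
        m_top = toTop;           // survivors are now compacted
    }

    size_t bytesInUse() const { return m_top; }

private:
    size_t m_half;
    std::vector<uint8_t> m_space;
    size_t m_fromBase;
    size_t m_top;
};
```

Dead objects are simply never copied, so collection cost is proportional
to live data, which is what makes the approach attractive as a step
towards generational collection.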
>
> cheers,
> G.
>
>>
>> Regards,
>> Zoltan
>>
>>
>> _______________________________________________
>> squirrelfish-dev mailing list
>> [email protected]
>> http://lists.webkit.org/mailman/listinfo.cgi/squirrelfish-dev
>
>


