On 2015/08/31 07:21:23, fedor.indutny wrote:
On 2015/08/31 07:16:39, Michael Lippautz wrote:
> On 2015/08/31 06:45:29, fedor.indutny wrote:
> > On 2015/08/31 06:31:58, Michael Lippautz wrote:
> > > On 2015/08/28 20:49:41, fedor.indutny wrote:
> > All of the allocated buffers are in the new space, may I ask you to
> > elaborate a bit more on why the Scavenge is not possible in this case?
> >
>
> JSTypedArray (the wrapping JS object) usually starts in new space. The
> buffers (as described above) already live on some external heap.
I didn't mean the contents of ArrayBuffers; they are not directly subject
to scavenging anyway. What I meant is that all of these GCed buffers were
actually in the new space, so the Scavenge should be the fastest way to
handle them. Running an incremental GC with no "dead" objects in Old Space
seems to be a bit wasteful.
Let's fix some terminology:
* JS object: the frontend JS object.
* Buffer: the buffer holding the actual contents.
The JS object (pretty small) holds a backend, our buffer. The buffer is
externally allocated. The JS object starts in new space and the scavenger
IS already handling it. However, the microbenchmark (that's why it's only
a microbenchmark...) allocates only these small JS objects on the V8 heap
and thus triggers only a few scavenges (you need to hit a limit to trigger
a scavenge).
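To make that split concrete, here is a rough embedder-side sketch in C++.
The helper name is made up; v8::ArrayBuffer::New over external memory
matches the 2015-era v8.h, give or take the exact signature. Only the
small wrapper lands on the V8 heap; the payload is plain malloc'ed memory
that V8 merely points at:

  #include <cstdlib>
  #include "v8.h"

  // The JSArrayBuffer wrapper is small, starts in new space, and is
  // handled by the scavenger; |data| is never on the V8 heap at all.
  v8::Local<v8::ArrayBuffer> NewExternallyBackedBuffer(
      v8::Isolate* isolate, size_t byte_length) {
    void* data = malloc(byte_length);  // the "buffer", off-heap
    // Wraps externally owned memory; the embedder stays responsible
    // for freeing |data| once the wrapper is collected.
    return v8::ArrayBuffer::New(isolate, data, byte_length);
  }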
The incremental GC you see is triggered because the externally allocated
buffers (our backends) hit a limit where we need to synchronize with the
embedder (Chrome, node, ...) using a GC. The process of synchronizing with
the embedder is complex because we are dealing with a system where
multiple GCs interfere with each other and need to find a global
transitive closure (or at least a good enough approximation). This used to
be a full GC, but we switched to an incremental one to keep the pause time
small.
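For reference, that limit is driven by the external memory counter the
embedder reports to V8. A minimal sketch of the accounting, using the
public Isolate API (the helper names are made up; the real bookkeeping
lives inside the heap):

  #include <cstdint>
  #include "v8.h"

  // Every externally allocated backing store is reported to V8; once the
  // running total crosses the external allocation limit, V8 requests a
  // GC (nowadays the incremental one described above).
  void ReportBufferAllocated(v8::Isolate* isolate, size_t byte_length) {
    isolate->AdjustAmountOfExternalAllocatedMemory(
        static_cast<int64_t>(byte_length));
  }

  void ReportBufferFreed(v8::Isolate* isolate, size_t byte_length) {
    // Matching negative delta, typically from the wrapper's weak callback.
    isolate->AdjustAmountOfExternalAllocatedMemory(
        -static_cast<int64_t>(byte_length));
  }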
Is there any way to modify this behavior to make it better for both us and
Chrome? It would be great to have some sort of overridable callback which
will decide what to do when we hit external memory limits.
In general you don't know where the JS objects holding external memory
live. Doing a scavenge here only helps your specific microbenchmark (as
you know that most of your 65k allocated buffers are tied to objects in a
tight loop). You don't gain general knowledge by doing a scavenge.
Strategies for when we hit external allocation limits could also be
discussed with jochen@. Probably a good idea to get him in the loop once
he is back.
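If we do discuss it, the kind of hook being asked for might look roughly
like the following. To be clear, this is a purely hypothetical sketch and
NOT an existing V8 API; the names and the registration point are made up
for illustration:

  // Hypothetical: an embedder-overridable policy for what to do when
  // external memory hits the limit.
  enum class ExternalMemoryGCStrategy {
    kScavenge,            // cheap; only helps if holders are in new space
    kIncrementalMarking,  // current behavior: small pauses
    kFullGC               // old behavior: precise, but a long pause
  };

  // Hypothetical callback the embedder could register, e.g. via some
  // isolate->SetExternalMemoryPressureCallback(OnExternalMemoryPressure);
  ExternalMemoryGCStrategy OnExternalMemoryPressure(
      int64_t external_bytes, int64_t limit_bytes) {
    return ExternalMemoryGCStrategy::kIncrementalMarking;
  }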
Having said all that, allocating a "SlowBuffer" of 65k in a tight loop is
probably not the best use case :)
https://codereview.chromium.org/1316873004/