On 2015/08/31 07:16:39, Michael Lippautz wrote:
On 2015/08/31 06:45:29, fedor.indutny wrote:
> On 2015/08/31 06:31:58, Michael Lippautz wrote:
> > On 2015/08/28 20:49:41, fedor.indutny wrote:
> > > I guess making them a scavenges might be the next step to improve the
> > > performance.
> > >
> >
> > I don't think there's much to improve on here with the current architecture.
> >
> > The buffers in question are allocated externally (while the wrapping object
> > JSArrayBuffer is only conditionally allocated externally). We start an
> > incremental GC (see api.cc AdjustAmountOfExternalAllocatedMemory) once we
> > hit the limit. For the scavenge map you need the scavenge information and
> > for the full map you need a full transitive closure.
>
> We are allocating `kInternalized` ArrayBuffers in node. Is there any other
> way to allocate them "internally"?
>

With kExternalized we do not explicitly keep track of the buffers we are
handed when creating a JSArrayBuffer object. We just ignore the buffer
contents with respect to garbage collection. With kInternalized we use the
functions we just modified to keep track of them.
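As a rough illustration of the ownership difference (this is a standalone
sketch, not V8's actual tracking code; only the mode names mirror the real
enum), internalized stores are recorded and freed by the engine when the
wrapper dies, while externalized stores stay the embedder's responsibility:

```cpp
#include <cstdlib>
#include <set>

// Hypothetical sketch: names mirror the V8 creation modes, but the
// tracking logic here is illustrative only.
enum class CreationMode { kInternalized, kExternalized };

class BufferTracker {
 public:
  // Called when a JSArrayBuffer wrapper is created around `data`.
  void Register(void* data, CreationMode mode) {
    if (mode == CreationMode::kInternalized)
      tracked_.insert(data);  // engine now owns and will free this store
    // kExternalized: ignored for GC purposes; embedder keeps ownership.
  }

  // Called when the wrapper object is collected.
  void OnWrapperCollected(void* data) {
    auto it = tracked_.find(data);
    if (it != tracked_.end()) {
      free(*it);  // internalized stores are freed by the engine
      tracked_.erase(it);
    }
  }

  size_t tracked_count() const { return tracked_.size(); }

 private:
  std::set<void*> tracked_;
};
```

Externalized stores never enter `tracked_`, so collecting their wrapper is a
no-op here and the embedder must free the memory itself.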

As far as I can see from the API, though, in both cases we actually use a
special allocator (v8::ArrayBuffer::Allocator array_buffer_allocator) on the
isolate to allocate those buffers. This seems to be external at all times for
now, i.e., even for d8 we use a so-called ShellArrayBufferAllocator that
essentially just wraps malloc. jochen@ (currently OOO) should know why the
buffers are allocated externally at all times.
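For reference, such a malloc-wrapping allocator boils down to very little; a
standalone sketch of the same three-method shape (mirroring the
v8::ArrayBuffer::Allocator interface from v8.h, but without the v8 headers or
base class) would look roughly like:

```cpp
#include <cstdlib>
#include <cstring>

// Standalone sketch mirroring v8::ArrayBuffer::Allocator's interface
// (Allocate / AllocateUninitialized / Free); illustrative, not V8 code.
class ShellLikeAllocator {
 public:
  // Allocate() is expected to return zero-initialized memory.
  void* Allocate(size_t length) {
    void* data = malloc(length);
    if (data != nullptr) memset(data, 0, length);
    return data;
  }
  // AllocateUninitialized() may skip the zeroing for performance.
  void* AllocateUninitialized(size_t length) { return malloc(length); }
  // Free() releases a store previously returned by either method above.
  void Free(void* data, size_t /*length*/) { free(data); }
};
```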

Great, would love to learn more about it! Thanks!


> All of the allocated buffers are in the new space, may I ask you to
> elaborate a bit more on why the Scavenge is not possible in this case?

JSTypedArray (the wrapping JS object) usually starts in new space. The buffers
(as described above) already live on some external heap.

I didn't mean the contents of ArrayBuffers; they are not directly subject to
scavenging anyway. What I meant is that all of these GCed buffers were
actually in new space, so the Scavenge should be the fastest way to handle
them. Running an incremental GC with no "dead" objects in Old Space seems to
be a bit wasteful.

Is there any way to modify this behavior to make it better for both us and
Chrome? It would be great to have some sort of overridable callback that
decides what to do when we hit external memory limits.
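A hypothetical shape for such a hook (the names, enum, and signature below
are invented for illustration; no such callback exists in the V8 API being
discussed) might be:

```cpp
#include <cstdint>

// Hypothetical embedder hook, invented for illustration: when external
// memory crosses the limit, the callback picks which collection to run.
enum class GcChoice { kScavenge, kIncrementalMarking };

using ExternalMemoryLimitCallback =
    GcChoice (*)(int64_t external_bytes, int64_t limit_bytes);

// Default policy resembling the behavior described above: start an
// incremental GC once the external-memory limit is hit.
inline GcChoice DefaultPolicy(int64_t external_bytes, int64_t limit_bytes) {
  return external_bytes >= limit_bytes ? GcChoice::kIncrementalMarking
                                       : GcChoice::kScavenge;
}

// An embedder like node could override this to prefer a Scavenge when it
// knows the buffers in question live in new space.
inline GcChoice NodeLikePolicy(int64_t /*external_bytes*/,
                               int64_t /*limit_bytes*/) {
  return GcChoice::kScavenge;
}
```

The point is only the override seam: the engine consults the installed
callback instead of hard-coding the incremental GC.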

https://codereview.chromium.org/1316873004/

--
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev