Fedor,

Getting there! Now we can actually see what's happening :)

The code will get much better (and I suspect much faster). The main idea
is that we only process the XXX_scavenge_ maps during scavenges and
everything during a full GC.

Let me also note that V8’s new generation (scavenger) is a semi-space GC where
we copy once before we promote to old space. As a result we should not move
references from the scavenge map to the full map after the
FreeDeadArrayBuffers() call, but only on the PromoteArrayBuffer() call.

So what we really want is for
  live_array_buffers_
  not_yet_discovered_array_buffers_
and
  live_array_buffers_for_scavenge_
  not_yet_discovered_array_buffers_for_scavenge_
to be completely disjoint.

Then we can process only XXX_scavenge_ during a scavenge and everything
(including XXX_scavenge_) during a full GC. The rest of the comments are
inlined.
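
For illustration, here is a minimal sketch of what PromoteArrayBuffer()
could look like with disjoint sets (member names as in the list above;
the exact signature is an assumption):

void Heap::PromoteArrayBuffer(void* data) {
  // The buffer survived its second scavenge and moves to old space, so
  // its entry migrates from the scavenge maps to the full-GC map.
  DCHECK(live_array_buffers_for_scavenge_.count(data) > 0);
  live_array_buffers_[data] = live_array_buffers_for_scavenge_[data];
  live_array_buffers_for_scavenge_.erase(data);
  not_yet_discovered_array_buffers_for_scavenge_.erase(data);
  // No insert into not_yet_discovered_array_buffers_ is needed here:
  // that map is rebuilt from live_array_buffers_ at the end of the
  // next full GC.
}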

Can you then post the results of the changed behavior vs. tip-of-tree?

Also, please remove Sven from the review field (R=).

Thanks a lot!

-Michael



https://codereview.chromium.org/1316873004/diff/20001/src/heap/heap.cc
File src/heap/heap.cc (right):

https://codereview.chromium.org/1316873004/diff/20001/src/heap/heap.cc#newcode1752
src/heap/heap.cc:1752: void Heap::UnregisterArrayBuffer(bool in_new_space, void* data) {
We can simplify the main part a lot:

std::map<void*, size_t>* live_buffers =
    in_new_space ? &live_array_buffers_for_scavenge_
                 : &live_array_buffers_;
std::map<void*, size_t>* not_yet_discovered_buffers =
    in_new_space ? &not_yet_discovered_array_buffers_for_scavenge_
                 : &not_yet_discovered_array_buffers_;
DCHECK(live_buffers->count(data) > 0);
live_buffers->erase(data);
not_yet_discovered_buffers->erase(data);
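
(Selecting the two target maps through pointers up front keeps the
method to a single code path for both the new-space and the old-space
case.)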

https://codereview.chromium.org/1316873004/diff/20001/src/heap/heap.cc#newcode1761
src/heap/heap.cc:1761: not_yet_discovered_array_buffers_.erase(data);
This .erase() will not be part of the method, as we keep the sets
disjoint.

https://codereview.chromium.org/1316873004/diff/20001/src/heap/heap.cc#newcode1784
src/heap/heap.cc:1784: void Heap::FreeDeadArrayBuffers(bool from_scavenge) {
Since we keep the sets disjoint, we only need to visit the XXX_scavenge_
maps for the new-space part; in the full-GC case we visit all of them:

size_t freed_memory = 0;
for (auto& buffer : not_yet_discovered_array_buffers_for_scavenge_) {
  isolate_->array_buffer_allocator()->Free(buffer.first, buffer.second);
  freed_memory += buffer.second;
  live_array_buffers_for_scavenge_.erase(buffer.first);
}
if (!from_scavenge) {
  for (auto& buffer : not_yet_discovered_array_buffers_) {
    isolate_->array_buffer_allocator()->Free(buffer.first, buffer.second);
    freed_memory += buffer.second;
    live_array_buffers_.erase(buffer.first);
  }
}
// freed_memory is then reported via
// AdjustAmountOfExternalAllocatedMemory() at the end.

not_yet_discovered_array_buffers_for_scavenge_ =
    live_array_buffers_for_scavenge_;
if (!from_scavenge) {
  not_yet_discovered_array_buffers_ = live_array_buffers_;
}

https://codereview.chromium.org/1316873004/diff/20001/src/heap/heap.cc#newcode1828
src/heap/heap.cc:1828: void Heap::TearDownArrayBuffers() {
nit: While practically not relevant, we should still tear down gently
and record the freed memory (like in FreeDeadArrayBuffers).

e.g.
  freed_memory += buffer.second;
throughout the loop

and
  AdjustAmountOfExternalAllocatedMemory(-freed_memory);
at the end.
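
For concreteness, a sketch of such a teardown (same map members as
above; reaching the public AdjustAmountOfExternalAllocatedMemory() via
reinterpret_cast is an assumption):

void Heap::TearDownArrayBuffers() {
  size_t freed_memory = 0;
  // Free everything that is still registered, in both generations.
  for (auto& buffer : live_array_buffers_) {
    isolate_->array_buffer_allocator()->Free(buffer.first, buffer.second);
    freed_memory += buffer.second;
  }
  for (auto& buffer : live_array_buffers_for_scavenge_) {
    isolate_->array_buffer_allocator()->Free(buffer.first, buffer.second);
    freed_memory += buffer.second;
  }
  live_array_buffers_.clear();
  live_array_buffers_for_scavenge_.clear();
  not_yet_discovered_array_buffers_.clear();
  not_yet_discovered_array_buffers_for_scavenge_.clear();
  // Record the freed memory so the external memory accounting stays
  // consistent.
  reinterpret_cast<v8::Isolate*>(isolate_)
      ->AdjustAmountOfExternalAllocatedMemory(
          -static_cast<int64_t>(freed_memory));
}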

https://codereview.chromium.org/1316873004/
