I don't have an opinion on the API itself.

I do have a concern about the motivating use case: the number of allocated
bytes for a given workload/testcase is not fixed, so I suspect your
experience with future regression tests based on such infrastructure might
be disappointing. In particular, the sizes of objects can grow or shrink
over time when we add or remove internal fields (or change the sizes of all
fields, such as by turning pointer compression on or off). Also, when
functions get optimized, allocations might be optimized out, e.g. by using
unboxed ints/floats instead of HeapNumbers, or by constant-folding
operations, or by escape-analyzing entire objects; and various
optimization/inlining heuristics that affect this keep changing over time.
Since optimization happens as async background tasks, even between two runs
of the same V8 binary you may see different behavior when the optimized
code is available a little earlier or a little later. With that in mind,
would such a new API actually satisfy your needs?
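
To make that concrete, here is a rough sketch of the kind of regression
check I assume you have in mind. It uses the total_allocated_bytes()
accessor proposed below, which does not exist in V8 today, and given the
variance described above it would probably need a tolerance band rather
than an exact expected value:

  #include <cstdint>
  #include <v8.h>

  // Bytes allocated in the isolate so far, per the proposed (hypothetical)
  // accessor.
  uint64_t AllocatedBytes(v8::Isolate* isolate) {
    v8::HeapStatistics stats;
    isolate->GetHeapStatistics(&stats);
    return stats.total_allocated_bytes();  // proposed accessor, not in V8 yet
  }

  // Runs a workload and checks its allocations against a budget, with
  // slack for run-to-run and version-to-version variance.
  bool CheckAllocationBudget(v8::Isolate* isolate,
                             v8::Local<v8::Context> context,
                             v8::Local<v8::Script> workload,
                             uint64_t expected_bytes) {
    uint64_t before = AllocatedBytes(isolate);
    workload->Run(context).ToLocalChecked();
    uint64_t measured = AllocatedBytes(isolate) - before;
    const double kSlack = 0.10;  // arbitrary tolerance for illustration
    return static_cast<double>(measured) <=
           static_cast<double>(expected_bytes) * (1.0 + kSlack);
  }

Even with such slack, heuristic changes can move the number past any fixed
tolerance, which is the core of my concern.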


On Thu, Sep 25, 2025 at 9:00 AM Caio Lima <[email protected]> wrote:

> Hi everyone,
>
> The goal of this thread is to start a discussion about an extension to
> the HeapStatistics API that would expose a counter for the total
> allocation that has happened in an Isolate since its creation, without
> double-counting objects that get promoted from the young to the old
> generation.
>
> The motivation is to provide a more reliable way to write regression
> tests for compilers targeting WASM/JS, where it would be possible to
> measure how much memory was allocated up to a given point in time. IIUC,
> the current API provides a HeapProfiler that can do some of this
> counting, but it isn't the best fit for this kind of regression test:
> we want as little overhead as possible, and the AllocationObserver used
> by the HeapProfiler imposes some. Given that, our proposal for counting
> total allocation with minimal overhead follows.
>
> Proposal:
>
> - Add a new uint64_t field to HeapStatistics (tentatively named
>   total_allocated_bytes_).
> - Expose this via uint64_t total_allocated_bytes() in the public API.
> - When Isolate::GetHeapStatistics is called, it would return the total
>   number of bytes allocated since isolate creation.
> - Counting would be implemented with minimal overhead by (see the sketch
>   after this list):
>   - Incrementing the global counter by the number of bytes allocated in a
>     LinearAllocationArea (LAA) when a mutator releases the LAA back to
>     the collector.
>   - For spaces without an LAA, such as Large Object Space, incrementing
>     the counter when the allocation occurs.
>
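> To make the shape of this more concrete, here is a rough C++ sketch; the
> class and hook names below are illustrative assumptions about where the
> increments would happen, not existing V8 code:
>
>   #include <cstddef>
>   #include <cstdint>
>
>   // Hypothetical internal counter; conceptually one per isolate/heap.
>   class AllocationCounter {
>    public:
>     // A mutator releases a LinearAllocationArea (LAA) back to the
>     // collector: add only the bytes actually handed out from that LAA.
>     void OnLinearAreaReleased(size_t used_bytes) {
>       total_allocated_bytes_ += used_bytes;
>     }
>
>     // Spaces without an LAA (e.g. Large Object Space) bump the counter
>     // directly at allocation time.
>     void OnLargeObjectAllocated(size_t object_size) {
>       total_allocated_bytes_ += object_size;
>     }
>
>     uint64_t total() const { return total_allocated_bytes_; }
>
>    private:
>     uint64_t total_allocated_bytes_ = 0;
>   };
>
>   // Public side: Isolate::GetHeapStatistics would copy this total into
>   // the new HeapStatistics field, exposed as total_allocated_bytes().
>
> (This ignores thread-safety and where exactly the hooks live; it is only
> meant to show the intended cost model: one addition per LAA release or
> per large-object allocation.)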
>
> We’d like to hear your thoughts on this proposal:
>
> - Does this approach make sense from a design perspective?
> - Are there concerns about overhead, naming, or API surface?
> - What would be required to get such an extension accepted upstream?
>
>
> Thanks in advance,
>
> Caio Lima.
>
