It looks like it's related to shared objects between isolates. Is there a
newer document than
https://docs.google.com/document/d/18lYuaEsDSudzl2TDu-nc-0sVXW7WTGAs14k64GEhnFg/edit?usp=drivesdk
that describes how this works today, in particular cross-isolate GCs?

On Mon, 13 Jan 2025, 15:25 Jakob Kummerow, <jkumme...@chromium.org> wrote:

> Sounds like a bug, but without more details (or a repro) I don't have a
> more specific guess than that.
>
> If you're desperate, you could try to bisect it (even with a flaky repro).
> Or review the ~500 changes between those branches:
> https://chromium.googlesource.com/v8/v8/+log/branch-heads/13.1..branch-heads/13.2?n=10000
>
>
> On Mon, Jan 13, 2025 at 2:48 PM 'Dan Lapid' via v8-dev <
> v8-dev@googlegroups.com> wrote:
>
>> Hi,
>> In V8 13.2 and 13.3 we see wasm isolates' external memory usage blowing up
>> sometimes (up to gigabytes).
>> Under V8 13.1, the same code never used more than 80-100 MB.
>> The issue doesn't happen every time for the same wasm bytecode. It
>> doesn't even reproduce locally.
>> But it does happen a significant percentage of the time.
>> This only started happening in 13.2; what are we missing? Should we
>> be enabling/disabling some flags?
>> It also seems that 13.3 is significantly worse in terms of error rate.
>> The problem happens under "--liftoff-only".
>> We use pointer compression but not the sandbox.
>> We've tried enabling --turboshaft-wasm in 13.1 and the problem did not
>> reproduce.
>> Has anything changed that we need to adapt to?
>> Would really appreciate your help!
>>
>> --
>>
>
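For the bisect suggestion above: git bisect run treats exit code 0 as "good",
125 as "skip this revision", and any other non-zero code as "bad", so a flaky
repro can be handled by repeating it several times per revision. A rough
sketch of such a driver follows; the d8 path, repro script, and run count are
placeholders, not taken from this thread.

    // Sketch of a `git bisect run` driver for a flaky repro: repeat the
    // repro and report "bad" on the first failing run. The d8 path,
    // script name, and run count are placeholders.
    #include <cstdlib>

    int main() {
      for (int i = 0; i < 20; ++i) {  // repeat to catch the flaky failure
        // repro.js is assumed to exit non-zero when it detects the blow-up.
        if (std::system("./out/x64.release/d8 --liftoff-only repro.js") != 0) {
          return 1;  // non-zero (other than 125) tells git bisect "bad"
        }
      }
      return 0;  // all runs passed -> "good"
    }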
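On measuring the blow-up described in the report: one way an embedder might
confirm that the growth is in isolate-tracked external memory (rather than in
untracked malloc) is to log v8::HeapStatistics before and after running the
wasm workload on 13.1 vs. 13.2. A minimal sketch, assuming direct embedding of
V8 and an already-created isolate:

    // Log the isolate's external memory and used heap size at a given point.
    // Where and how often to call this is up to the embedder.
    #include <v8.h>
    #include <cstdio>

    void LogExternalMemory(v8::Isolate* isolate, const char* tag) {
      v8::HeapStatistics stats;
      isolate->GetHeapStatistics(&stats);
      std::printf("[%s] external: %zu bytes, used heap: %zu bytes\n", tag,
                  stats.external_memory(), stats.used_heap_size());
    }

Comparing these numbers around module instantiation and execution should show
whether the regression is in allocations the isolate accounts for, which would
help narrow down the candidate changes.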
