Oli,

> Yes, by changing v8_enable_pointer_compression_shared_cage you essentially
> get one r/o heap per isolate group. Having different r/o heaps in that
> scenario is probably fine, but certainly untested from our side. At least
> test it with a debug build to ensure the objects in the r/o heap we care
> about are where we expect them to be.
Unfortunately, setting it to false crashed mksnapshot during the build for
the version I'm currently using, so that's a no-go. In fact it has probably
been crashing since 12.5.97, since I had to go back that far to get a build
to work with the flag set to false.
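For reference, a minimal sketch of the pattern that trips the check, using
only the public embedder API (the function and blob names are illustrative,
not from this thread):

  #include <v8.h>

  // Two isolates created from different StartupData blobs: the second one
  // joins the isolate group set up by the first, whose read-only heap was
  // deserialized from the first blob, so the checksums no longer match and
  // the CHECK in read-only-spaces.cc fires in debug builds.
  void ReproSketch(v8::StartupData* stock_blob, v8::StartupData* custom_blob,
                   v8::ArrayBuffer::Allocator* allocator) {
    v8::Isolate::CreateParams p1;
    p1.snapshot_blob = stock_blob;
    p1.array_buffer_allocator = allocator;
    v8::Isolate* first = v8::Isolate::New(p1);   // r/o heap created here

    v8::Isolate::CreateParams p2;
    p2.snapshot_blob = custom_blob;  // different r/o section, new checksum
    p2.array_buffer_allocator = allocator;
    v8::Isolate* second = v8::Isolate::New(p2);  // reuses the group's r/o heap

    first->Dispose();
    second->Dispose();
  }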
On Thursday, January 9, 2025 at 9:09:08 PM UTC+8 [email protected] wrote:

> Ronald:
>
> Note it's the v8_enable_pointer_compression_shared_cage that is the
> problem when building. By turning it off it would switch the isolate
> group from a global one to one for each isolate, thus fixing the checksum
> error, since the read-only heap seems to be tied to the group. However,
> doing so caused mksnapshot to crash during the build.
>
> Yes, by changing v8_enable_pointer_compression_shared_cage you essentially
> get one r/o heap per isolate group. Having different r/o heaps in that
> scenario is probably fine, but certainly untested from our side. At least
> test it with a debug build to ensure the objects in the r/o heap we care
> about are where we expect them to be.
>
> Erik:
>
> How is the RO heap created? Does the deserialization code create the RO
> heap on the first deserialization, and then use it on subsequent
> deserializations?
>
> Exactly. See `Isolate::SetUpFromReadOnlyArtifacts`.
>
> *oli
>
> On Wed, Jan 8, 2025 at 11:04 AM Erik Corry <[email protected]> wrote:
>
>> On Tue, Jan 7, 2025 at 6:56 PM Olivier Flückiger <[email protected]>
>> wrote:
>>
>>> Hi Ronald,
>>>
>>> First of all, indeed there are many flags around compression and
>>> snapshots, and picking random combinations is likely to yield a
>>> nonsensical or unsupported build. Unfortunately there is enough churn
>>> that it's hard to make definitive statements about which combos are
>>> supported. Could you post your complete set of gn_args you are trying
>>> to build with?
>>>
>>> The shared read-only heap cannot be disabled anymore. In other words,
>>> on compressed builds you have one r/o heap per isolate group (i.e., per
>>> cage), and as you noticed we assume that they are all identical. On
>>> non-compressed builds there is one r/o heap, period.
>>
>> So the assumption is that only one snapshot is deserialized into a given
>> isolate group, but it can be deserialized several times to create
>> separate contexts?
>>
>> How is the RO heap created? Does the deserialization code create the RO
>> heap on the first deserialization, and then use it on subsequent
>> deserializations?
>>
>>> It might be possible to load different context snapshots based on the
>>> same r/o snapshot (no guarantees though). But, as you noticed, by
>>> default we started promoting user objects into the r/o heap in the
>>> SnapshotCreator. This will cause the different snapshots to have
>>> divergent r/o heaps, which violates an invariant. This happens at [0]
>>> and is, as noticed, disabled with --stress-snapshot. If it helps we can
>>> certainly add a flag to mksnapshot that disables r/o promotion.
>>>
>>> *oli
>>>
>>> [0]
>>> https://source.chromium.org/chromium/chromium/src/+/main:v8/src/snapshot/snapshot.cc;drc=da761e23adc8a7ee793546612237380401e52e6f;l=1111
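To make the promotion point concrete, a minimal sketch of custom snapshot
creation with the public v8::SnapshotCreator API (constructor signatures
vary across V8 releases; this is illustrative, not the exact code under
discussion):

  #include <v8.h>

  v8::StartupData MakeCustomBlob() {
    v8::SnapshotCreator creator;  // creates and owns its own isolate
    v8::Isolate* isolate = creator.GetIsolate();
    {
      v8::Isolate::Scope isolate_scope(isolate);
      v8::HandleScope handle_scope(isolate);
      v8::Local<v8::Context> context = v8::Context::New(isolate);
      // ... populate the context with whatever should be snapshotted ...
      creator.SetDefaultContext(context);
    }
    // During CreateBlob, eligible user objects may be promoted into the
    // read-only heap (the code at [0] above), so this blob's r/o section
    // -- and hence its checksum -- can diverge from the stock snapshot's.
    return creator.CreateBlob(
        v8::SnapshotCreator::FunctionCodeHandling::kClear);
  }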
>>> On Wed, Jan 1, 2025 at 12:12 PM 'Ronald Fenner' via v8-dev <
>>> [email protected]> wrote:
>>>
>>>> Unfortunately if you use that option you're risking corruption of the
>>>> shared space. In normal circumstances you'd probably only load up one
>>>> snapshot for an app's run, but in the case of tests that is out the
>>>> door, and with my project you would be able to generate an app that
>>>> has several isolates snapshotted, so that breaks that assumption.
>>>>
>>>> I wouldn't say it's half-baked; it's just hidden, and it most likely
>>>> fits Chromium's use case. For embedders other than Chromium it should
>>>> probably be exposed so we have more control over it.
>>>>
>>>> I could get around the global shared space if the build arg
>>>> v8_enable_pointer_compression_shared_cage didn't cause mksnapshot to
>>>> crash when set to false.
>>>>
>>>> Not sure if the v8 team has a test that can generate different
>>>> argument sets for the CI, but if not they probably should, since if
>>>> you add an option and it only works when enabled (or only when
>>>> disabled), the feature that added it needs to be fixed to work both
>>>> ways. Of course, features that depend on another one being enabled
>>>> should be disabled when it's not, which is some of what the logic in
>>>> the build file tries to do.
>>>>
>>>> On Wednesday, January 1, 2025 at 6:37:17 PM UTC+8 [email protected]
>>>> wrote:
>>>>
>>>>> On Wed, Jan 1, 2025 at 8:08 AM 'Ronald Fenner' via v8-dev
>>>>> <[email protected]> wrote:
>>>>> >
>>>>> > I'm running into a checksum error when trying to load a custom
>>>>> > snapshot during a unit test where it was created. Specifically this
>>>>> > error:
>>>>> >
>>>>> > # Fatal error in ../../src/heap/read-only-spaces.cc, line 96
>>>>> > # Check failed: read_only_blob_checksum_ == snapshot_checksum
>>>>> > (<unprintable> vs. 2723829699).
>>>>> >
>>>>> > I've dug into it and found that an IsolateGroup is automatically
>>>>> > created, and any future isolates are loaded into this group,
>>>>> > pinning the read-only shared space checksum to the first startup
>>>>> > data used.
>>>>> >
>>>>> > Subsequently, when I tried to load the custom snapshot, its isolate
>>>>> > gets put in this group and its checksum no longer matches.
>>>>> >
>>>>> > I was able to substitute the custom startup data -- which for this
>>>>> > test is just a recreation of the v8 snapshot, no extras loaded into
>>>>> > it -- in as the v8 startup blob, and my core tests using it passed
>>>>> > with no issue.
>>>>> >
>>>>> > I've tried disabling the Sandbox: same error. I tried disabling the
>>>>> > shared ro heap, but this caused a torque static assertion about
>>>>> > builtins.
>>>>> >
>>>>> > Disabling both the sandbox and the shared ro heap causes mksnapshot
>>>>> > to crash with:
>>>>> >
>>>>> > # Fatal error in ../../src/diagnostics/objects-debug.cc, line 673
>>>>> > # Check failed: HeapLayout::InAnySharedSpace(*this).
>>>>> >
>>>>> > I also tried disabling the shared pointer compression cage, but
>>>>> > this brought back mksnapshot crashing as in my other thread.
>>>>> >
>>>>> > Unfortunately, other than creating a whole other app to run the
>>>>> > unit tests that check the snapshot worked, there doesn't seem to be
>>>>> > a way around this, as the IsolateGroup is not exposed in the public
>>>>> > API for embedders, and there seems to be no way to create a new one
>>>>> > and associate it with an isolate during creation.
>>>>> >
>>>>> > I'm currently using 13.1.201.19, which is the stable release
>>>>> > shipped in the current version of Chromium. This did work in
>>>>> > 12.4.254.15, which is what I upgraded from, and I'm pretty sure the
>>>>> > shared read-only heap is a new feature added since then.
>>>>> >
>>>>> > It seems a little odd to assume that the startup data passed during
>>>>> > isolate creation wouldn't change, since it's a parameter of the
>>>>> > create params, though I know you all mainly base your use cases on
>>>>> > Chromium's use of V8 and not on what an embedder might do.
>>>>> >
>>>>> > Perhaps a fix is just to calculate a checksum for the startup data
>>>>> > when first creating the isolate, and if it matches, use the group
>>>>> > created for it; otherwise create a new group for that snapshot
>>>>> > data. That, or expose the IsolateGroup so the embedder can create
>>>>> > one and associate it with an isolate at creation, like the cppheap.
>>>>>
>>>>> I've been hitting that first check for some months now (plus several
>>>>> others; the IsolateGroup work seems really half-baked), but
>>>>> fortunately it only happens in debug builds, and you can circumvent
>>>>> it by passing in --stress_snapshot.
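For completeness, a minimal sketch of applying that workaround from an
embedder (assuming, as this thread suggests, that the flag must be in
effect before the custom snapshot is created and before any isolate
exists; the main() here is illustrative):

  #include <libplatform/libplatform.h>
  #include <v8.h>

  #include <memory>

  int main() {
    // Flags must be set before V8 is initialized and before any isolate
    // (or SnapshotCreator) is created for them to take effect.
    v8::V8::SetFlagsFromString("--stress-snapshot");

    std::unique_ptr<v8::Platform> platform =
        v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    // ... create isolates / run the snapshot unit tests ...

    v8::V8::Dispose();
    v8::V8::DisposePlatform();
    return 0;
  }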
