Hi Eric,

We took a closer look at how this works in V8+Chromium, and long story short, we only have one microtask queue shared between different contexts that can interact with each other. I think this is honestly the only reasonable solution; multiple microtask queues interleaving with each other are asking for trouble. If the inner module has any dependencies on promises from the outer context, I expect you'll hit the same issue in reverse, and I bet that by chaining promises that interleave the inside and the outside you could cause this problem up to an arbitrary depth.

Could the inner source module post a full event loop task after it finishes its execution, and resolve the execute promise in that task? Then any microtasks started by the execution are guaranteed to complete before that task runs.
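Roughly something along these lines on the embedder side; this is only a sketch against the public V8 API, not actual Node code, and the names (ResolveAfterCheckpointTask, inner_queue, outer_resolver) are made up for illustration:

#include <memory>
#include <utility>

#include "v8-platform.h"
#include "v8.h"

// Hypothetical task (not Node's actual code): posted to the event loop after
// the inner module has finished executing. By the time it runs, the normal
// per-task microtask checkpoint has happened; for an explicit inner queue we
// drain it here ourselves, and only then resolve the promise that the outer
// context sees.
class ResolveAfterCheckpointTask : public v8::Task {
 public:
  ResolveAfterCheckpointTask(v8::Isolate* isolate,
                             v8::MicrotaskQueue* inner_queue,
                             v8::Global<v8::Context> outer_context,
                             v8::Global<v8::Promise::Resolver> outer_resolver,
                             v8::Global<v8::Value> value)
      : isolate_(isolate),
        inner_queue_(inner_queue),
        outer_context_(std::move(outer_context)),
        outer_resolver_(std::move(outer_resolver)),
        value_(std::move(value)) {}

  void Run() override {
    v8::HandleScope handle_scope(isolate_);
    // Microtasks enqueued by the inner execution complete here, strictly
    // before the outer context can observe the resolution below.
    inner_queue_->PerformCheckpoint(isolate_);

    v8::Local<v8::Context> context = outer_context_.Get(isolate_);
    v8::Context::Scope context_scope(context);
    outer_resolver_.Get(isolate_)
        ->Resolve(context, value_.Get(isolate_))
        .Check();
  }

 private:
  v8::Isolate* isolate_;
  v8::MicrotaskQueue* inner_queue_;
  v8::Global<v8::Context> outer_context_;
  v8::Global<v8::Promise::Resolver> outer_resolver_;
  v8::Global<v8::Value> value_;
};

// Called once the inner module's Evaluate(...) has returned `result`: instead
// of resolving the outer-facing promise immediately, post a full task for it.
void ResolveLater(v8::Isolate* isolate, v8::Platform* platform,
                  v8::MicrotaskQueue* inner_queue,
                  v8::Global<v8::Context> outer_context,
                  v8::Global<v8::Promise::Resolver> outer_resolver,
                  v8::Global<v8::Value> result) {
  platform->GetForegroundTaskRunner(isolate)->PostTask(
      std::make_unique<ResolveAfterCheckpointTask>(
          isolate, inner_queue, std::move(outer_context),
          std::move(outer_resolver), std::move(result)));
}

The key property is that the resolution of the outer-facing promise is delayed by one full task, so the inner queue gets its checkpoint before anything in the outer context can react.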
- Leszek

On Mon, Sep 8, 2025 at 7:34 AM Eric Rannaud <[email protected]> wrote:

> Hi,
>
>> I have a possible workaround on the Node side: the inner module registers a callback with the outer microtask queue, and whenever the outer queue completes a checkpoint, the callback tries to checkpoint the inner queue too, just in case. This is a little silly as there will typically be nothing to drain on the inner queue. At least it's not an expensive operation to invoke PerformCheckpoint() on an empty queue.
>
> As explained in the Node commit https://github.com/nodejs/node/pull/59801/commits/9f3b583e7abdbbd3c29a7c5eca04682b2b66993c, the workaround is only partial:
>
> "This workaround is not foolproof. While it should work with multiple nested contexts (not tested), it will not work if, in the default context A, we have two contexts B and C, each evaluated from A, and we create promises in B and pass them into context C, where C resolves them. To handle this more complicated setup, I think we would likely need the help of V8: we would want to be notified when a task is enqueued onto the queue of a different context than the current one."
>
> To flesh this out a little bit, EnqueueMicrotask(handlerContext, task) would, in pseudocode:
>
>   if (handler_native_context != current_native_context
>       && handler_native_context_queue != current_native_context_queue) {
>     invoke EnqueuedFromOtherContextCallback on handler_native_context_queue
>   }
>
> In Node, this callback would be used to checkpoint the queue in handlerContext, either right away or at an opportune time.
>
> I'd imagine the Torque version of EnqueueMicrotask() would call into C++ in the same branch, rather than invoke the callbacks itself.
>
> Is this new API something that V8 would even consider?
>
> Thanks,
> Eric
>
> On Fri, Sep 5, 2025 at 9:35 PM Eric Rannaud <[email protected]> wrote:
>
>> Hi,
>>
>> I will submit the following patch to V8 (option B from comment #2), which aligns the C++ code with the Torque version.
>>
>> diff --git a/deps/v8/src/objects/objects.cc b/deps/v8/src/objects/objects.cc
>> index 3e8fa8d3368f..7b40af964558 100644
>> --- a/deps/v8/src/objects/objects.cc
>> +++ b/deps/v8/src/objects/objects.cc
>> @@ -5166,13 +5166,13 @@ MaybeHandle<Object> JSPromise::Resolve(DirectHandle<JSPromise> promise,
>>    MaybeDirectHandle<Object> then;
>>
>>    // Make sure a lookup of "then" on any JSPromise whose [[Prototype]] is the
>> -  // initial %PromisePrototype% yields the initial method. In addition this
>> -  // protector also guards the negative lookup of "then" on the intrinsic
>> -  // %ObjectPrototype%, meaning that such lookups are guaranteed to yield
>> -  // undefined without triggering any side-effects.
>> +  // initial %PromisePrototype% (in the current native context) yields the
>> +  // initial method. In addition this protector also guards the negative lookup
>> +  // of "then" on the intrinsic %ObjectPrototype%, meaning that such lookups are
>> +  // guaranteed to yield undefined without triggering any side-effects.
>>    if (IsJSPromise(*resolution_recv) &&
>> -      resolution_recv->map()->prototype()->map()->instance_type() ==
>> -          JS_PROMISE_PROTOTYPE_TYPE &&
>> +      resolution_recv->map()->prototype()->SafeEquals(
>> +          *isolate->promise_prototype()) &&
>>        Protectors::IsPromiseThenLookupChainIntact(isolate)) {
>>      // We can skip the "then" lookup on {resolution} if its [[Prototype]]
>>      // is the (initial) Promise.prototype and the Promise#then protector
>>
>> I have a possible workaround on the Node side: the inner module registers a callback with the outer microtask queue, and whenever the outer queue completes a checkpoint, the callback tries to checkpoint the inner queue too, just in case. This is a little silly as there will typically be nothing to drain on the inner queue. At least it's not an expensive operation to invoke PerformCheckpoint() on an empty queue.
>>
>> Thanks,
>> Eric
>>
>> On Fri, Sep 5, 2025 at 12:06 PM Eric Rannaud <[email protected]> wrote:
>>
>>> Hi Leszek,
>>>
>>> On Fri, Sep 5, 2025 at 7:36 AM Leszek Swirski <[email protected]> wrote:
>>>
>>>> It sounds like you've found a bug in our C++ implementation which is incompatible with the spec; I'd have to read into the exact details but it sounds plausible. This is great, and we should fix this, however your subsequent comments sound like they want the broken behaviour?
>>>
>>> Something in between. While the C++ code is closer, I think we'd want a variant: we want the C++ behavior only if the resolution is an already-fulfilled promise, otherwise we want the Torque behavior.
>>>
>>>> In general, for us, spec is non-negotiable (the few times we do break spec is for web compatibility or some serious performance concern), so if you think a break from spec behaviour would be useful for Node, then you'd want to make that a spec proposal.
>>>
>>> Understood.
>>>
>>>> The nested microtask queues Node is using sound a bit broken, since it sounds like the microtask queue is only drained once? If I understand the issue correctly, then the right fix would be for Node to always drain the inner microtask queue after something executes in the inner realm, or, I guess, patch the native context after draining it to refer to the outer microtask queue.
>>>
>>> I did consider patching the context to point to the outer queue after evaluate(), or just setting the queue to null, but in typical node:vm usage this context object may be shared with other (inner) modules that may be evaluated later, so I don't think we can actually modify it.
>>>
>>> Every time some code awaits the already-fulfilled promise returned by evaluate(), a new (trivial) task will appear on the otherwise (long) inactive inner microtask queue.
>>>
>>> The issue for Node is that there is no way of knowing that the inner microtask queue should be drained. This (trivial) task gets enqueued after inner_module.evaluate() returns, at a point where we are done with inner_module, and I'm not sure where there would be a reasonable place to check "what if the queue for this other context we're not using anymore needs draining?"
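>>> To make the shape of the problem concrete, the setup is roughly the following on the embedder side (just a sketch, not Node's actual code; InnerRealm and MakeInnerRealm are made-up names):
>>>
>>> #include <memory>
>>>
>>> #include "v8.h"
>>>
>>> // Made-up illustration of the node:vm-style setup: an inner context wired
>>> // to its own explicit MicrotaskQueue.
>>> struct InnerRealm {
>>>   std::unique_ptr<v8::MicrotaskQueue> queue;
>>>   v8::Global<v8::Context> context;
>>> };
>>>
>>> InnerRealm MakeInnerRealm(v8::Isolate* isolate) {
>>>   InnerRealm realm;
>>>   realm.queue =
>>>       v8::MicrotaskQueue::New(isolate, v8::MicrotasksPolicy::kExplicit);
>>>   v8::HandleScope handle_scope(isolate);
>>>   v8::Local<v8::Context> context = v8::Context::New(
>>>       isolate, /*extensions=*/nullptr,
>>>       /*global_template=*/v8::MaybeLocal<v8::ObjectTemplate>(),
>>>       /*global_object=*/v8::MaybeLocal<v8::Value>(),
>>>       /*internal_fields_deserializer=*/v8::DeserializeInternalFieldsCallback(),
>>>       realm.queue.get());
>>>   realm.context.Reset(isolate, context);
>>>   return realm;
>>> }
>>>
>>> // After evaluating the inner module inside realm.context, the embedder
>>> // drains realm.queue once:
>>> //
>>> //   realm.queue->PerformCheckpoint(isolate);
>>> //
>>> // The problem: when outer code later awaits the (already fulfilled) promise
>>> // returned by evaluate(), a trivial thenable job lands back on realm.queue,
>>> // and nothing tells the embedder that this otherwise idle queue needs
>>> // draining again.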
>>> Another way to describe the issue is that with two contexts A and B, with queues QA and QB, context B is effectively waiting on something to happen in context A (the trivial thenable job task), but while there is a task enqueued on QA, there are no tasks enqueued on QB to indicate that the (outer) module B has more work to do, causing the evaluation of module B to just terminate early. The lack of dependency tracking between contexts B and A is what is broken in the spec, I think.
>>>
>>> Whenever context B enqueues something onto QA, it is possible (but not guaranteed) that A will in turn enqueue something back onto QB. So, context B should be aware of that possibility and consider itself waiting on a QA checkpoint. It's only after a QA checkpoint is performed that context B can check whether its queue is effectively empty or not.
>>>
>>> Possibly, in Node or in v8::SourceTextModule, after finding that the current queue is empty, before terminating the evaluation of the current module, we could check if any other "related" microtask queues are non-empty and drain them, before checking if doing this added any tasks to our current queue.
>>>
>>> Or, after all, it may be worth considering another option I included in comment #3: "In V8, add a callback to the public API MicrotaskQueue to be notified, with policy kExplicit: either when a task is enqueued, or more efficiently, when there are enqueued tasks in strategic places (maybe in the same places where, with policy kAuto, a checkpoint may be performed). This would let Node know that the queue associated with an already-evaluated module needs to be drained again. I have not tried this, but it seems very clunky."
>>>
>>> This callback could be associated with the queue: OnMicrotaskEnqueue(microtask_queue). In Node, we would maintain enough state to know if we've entered a context associated with this queue. In the callback: if the queue belongs to an entered context, we know that the queue will get drained later, so we have nothing to do; but if this queue is not associated with an entered context, we immediately drain the queue.
>>>
>>> Or this callback could be associated with the native context: OnMicrotaskEnqueue(native_context). The Node implementation of this callback would be simpler, as we only need to check if native_context matches the isolate's current native context.
>>>
>>> Thanks,
>>> Eric
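For concreteness, the OnMicrotaskEnqueue / EnqueuedFromOtherContextCallback idea discussed above might look something like this from the embedder's side. This is purely hypothetical: no such hook exists in V8 today, and the names are taken from the pseudocode and proposals in the thread.

#include "v8.h"

// Purely hypothetical callback type: the embedder is told that a microtask
// was just enqueued on the queue of a native context other than the current
// one.
using EnqueuedFromOtherContextCallback = void (*)(v8::Isolate* isolate,
                                                  v8::MicrotaskQueue* queue,
                                                  void* data);

// Imagined Node-side handler: the queue belongs to a context we are not
// currently running in (e.g. an already-evaluated node:vm module), so make
// sure it gets drained again, either immediately or at an opportune time.
void OnEnqueuedFromOtherContext(v8::Isolate* isolate,
                                v8::MicrotaskQueue* queue, void* /*data*/) {
  // A real implementation would likely defer this (e.g. to the end of the
  // current checkpoint) rather than draining re-entrantly.
  queue->PerformCheckpoint(isolate);
}

Whether the hook is keyed on the queue or on the native context, the embedder-side logic stays about this small.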
