Because microtasks are always scheduled cooperatively on a single thread, the 
VM needs to complete all the remaining work in the current microtask, and then 
evaluate each subsequent microtask to completion, in order.
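A minimal sketch of that run-to-completion ordering, using `queueMicrotask`:

```javascript
// Microtasks share one thread and each runs to completion, in FIFO order:
// nothing can interleave inside a single microtask.
const log = [];
queueMicrotask(() => {
  log.push('m1 start');
  log.push('m1 end'); // m2 cannot run between these two pushes
});
queueMicrotask(() => log.push('m2'));
log.push('sync');
// After the queue drains, log is: ['sync', 'm1 start', 'm1 end', 'm2']
```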

So, in the context of a for-await-of loop (or any loop involving Await), at 
each step through the loop the current task is suspended and the remaining 
microtasks are performed before the loop continues.
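A small sketch of that suspension. The iterator below hands back plain objects, so every value is available synchronously, yet each loop step still yields to the microtask queue:

```javascript
const order = [];
const syncValues = {
  [Symbol.asyncIterator]() {
    let i = 0;
    // next() returns a plain object synchronously; `for await` still
    // awaits it, suspending the loop once per step.
    return { next: () => ({ value: ++i, done: i > 3 }) };
  },
};

async function run() {
  for await (const v of syncValues) order.push(v);
  order.push('done');
}

const finished = run();
// This line runs before any loop iteration executes, because the first
// Await already suspended the loop:
order.push('sync after call');
// Once the microtask queue drains: ['sync after call', 1, 2, 3, 'done']
```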

With Await, even if data is available synchronously, evaluation is always 
suspended and resumed in a later microtask.
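For example, in this minimal sketch the promise is already settled, but the continuation after the await still runs in a later microtask:

```javascript
const order = [];
async function demo() {
  order.push('before await');
  const v = await Promise.resolve(42); // already settled...
  order.push('after await: ' + v);     // ...but this runs in a later microtask
}
const done = demo();
order.push('sync after call');
// Final order: ['before await', 'sync after call', 'after await: 42']
```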

With async generator yield statements, you defer evaluation for each value 
yielded to the caller (who will likely await the result, deferring evaluation 
again), and you also defer evaluation whenever the generator is resumed.
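A sketch of that deferral: even when the next value is ready synchronously, next() hands back a promise, and the result only becomes observable in a later microtask:

```javascript
async function* numbers() {
  yield 1; // the value is ready synchronously...
  yield 2;
}

const it = numbers();
const p = it.next(); // ...but next() still returns a promise
let settled = false;
p.then((r) => {
  settled = true; // r is { value: 1, done: false }
});
// Synchronously after the call, the result is not yet observable:
// settled is still false here; it flips to true in a later microtask.
```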

Basically, async functions and iterables introduce many concurrently scheduled 
microtasks, which require waiting for the entire microtask queue to drain, and 
they become noticeably slower when there are long-running microtasks.
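The cost shows up even in a toy measurement. This is only an illustrative sketch, not a rigorous benchmark (timings vary by engine and hardware): it iterates the 65,535 synchronously available items from the quoted example with a plain for-of versus for-await-of, which pays at least one microtask round-trip per item:

```javascript
const N = 65535;
const data = Array.from({ length: N }, (_, i) => i);

function sumSync() {
  let total = 0;
  for (const v of data) total += v;
  return total;
}

async function sumAsync() {
  let total = 0;
  // `for await` over a plain array wraps every element in a promise,
  // suspending and resuming at least once per element.
  for await (const v of data) total += v;
  return total;
}

async function main() {
  let t = Date.now();
  const syncTotal = sumSync();
  const syncMs = Date.now() - t;

  t = Date.now();
  const asyncTotal = await sumAsync();
  const asyncMs = Date.now() - t;

  // Same result either way; the async loop is typically much slower.
  console.log({ syncTotal, asyncTotal, syncMs, asyncMs });
}
main();
```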

> On Apr 16, 2022, at 1:21 PM, Conrad Buck <[email protected]> wrote:
> 
> Hello,
> I'm working on a library for transforming async iterables, and I'm trying to 
> understand where the perf costs of async generators and for await of loops 
> really come from, especially for async iterators whose contents are mostly 
> available synchronously.
> 
> For example I might read a chunk of 65,535 characters from storage. Those 
> characters are available synchronously, and then an await is needed to fetch 
> the 65536th character. My observation is that for await loops introduce 
> significant additional costs to every character, making it unwise to ever 
> represent input as an async iterator of characters. I'm trying to understand 
> where the overhead cost for sync steps comes from and what options may exist 
> to lower it. To demonstrate what I'm talking about I've made a repo 
> containing a variety of relevant benchmarks. I know the language spec is part 
> of the picture, as it dictates that synchronously resolved awaits must 
> participate in the microtask queue. But is this really the source of the 
> cost? If so, is it reasonable to expect that that cost might be optimized 
> away in the future?
> 
> Thanks in advance,
> Conrad
> -- 
> -- 
> v8-dev mailing list
> [email protected]
> http://groups.google.com/group/v8-dev
> --- 
> You received this message because you are subscribed to the Google Groups 
> "v8-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/v8-dev/af3b87d1-33d3-4f97-9b6f-452b5929caa2n%40googlegroups.com.
