Hello,
I'm working on a library <https://github.com/iter-tools/iter-tools> for 
transforming async iterables, and I'm trying to understand where the 
performance costs of async generators and for-await-of loops really come 
from, especially for async iterators whose contents are mostly available 
synchronously.
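
To make that concrete, here is a minimal sketch of the kind of source I 
have in mind (hypothetical code for illustration only, not the actual 
library or benchmark code; readChunk stands in for whatever storage read 
is used):

    // One chunk is fetched asynchronously, but every character inside it
    // is then available synchronously.
    async function* characters(readChunk) {
      let chunk;
      while ((chunk = await readChunk()) != null) {
        yield* chunk; // tens of thousands of synchronous yields per await
      }
    }

    async function countCharacters(source) {
      let n = 0;
      // Each step here still goes through a promise and the microtask
      // queue, even when the next character is already in hand.
      for await (const chr of source) n++;
      return n;
    }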

For example, I might read a chunk of 65,535 characters from storage. Those 
characters are available synchronously, and then an await is needed to 
fetch the 65,536th character. My observation is that for-await-of loops 
introduce significant additional cost for every character, making it unwise 
to ever represent input as an async iterator of characters. I'm trying to 
understand where the overhead for these synchronous steps comes from and 
what options may exist to lower it. To demonstrate what I'm talking about 
I've made a repo containing a variety of relevant benchmarks 
<https://github.com/conartist6/async-perf>.

I know the language spec is part of the picture, as it dictates that 
synchronously resolved awaits must still participate in the microtask 
queue. But is this really the source of the cost? If so, is it reasonable 
to expect that this cost might be optimized away in the future?
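
As a tiny illustration of the spec behaviour I mean (just a sketch I'd run 
in Node or any modern engine, not one of the benchmarks from the repo):

    async function demo() {
      console.log('before await');
      await 0; // the value is synchronously available
      console.log('after await'); // but still resumes from the microtask queue
    }
    demo();
    console.log('after the call');
    // Prints: before await, after the call, after await

That per-step trip through the microtask queue is the cost I'm asking 
about for the synchronously available characters above.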

Thanks in advance,
Conrad
