Hi,

We would like to clarify something with respect to future plans for the 
Maglev code generator.

Maglev is currently a very "coarse-grained" compiler: most bytecode 
operations are lowered to a single Maglev node selected based on feedback 
(the Maglev design doc also refers to these as macro-ops). This is in 
contrast with TF (and TS), which are far more "fine-grained": there, a 
single node can be lowered to a CFG fragment containing many low-level 
machine operations.

Would it be fair to assume that Maglev will stay "coarse-grained" for the 
foreseeable future? If so, should we assume that the best way to port the 
machine rewriter's optimizations and instruction-selection patterns from TF 
to Maglev is to add them to the GenerateCode methods?

Finally, what will the Maglev pipeline ultimately look like? Is the design 
document for Maglev 
<https://docs.google.com/document/d/13CwgSL4yawxuYg3iNlM-4ZPCB8RgJya6b8H_E2F-Aek>
up to date with respect to future plans? Will there be any infrastructure 
for folding nodes outside of the graph builder?

Thanks,
Iurii

-- 
v8-dev mailing list
v8-dev@googlegroups.com
http://groups.google.com/group/v8-dev
To view this discussion on the web visit 
https://groups.google.com/d/msgid/v8-dev/86ab2a6b-c211-4435-b015-6b44bece452cn%40googlegroups.com.