Hi Iurii,

Without making any strict promises, we expect the coarseness of the Maglev
compiler to stay roughly at its current level, with instruction-level
optimisations being implemented in GenerateCode methods or
Maglev/MacroAssembler helper methods.

We're also not anticipating any large changes to the pipeline from the
design doc, which hasn't been updated but is still roughly accurate. The
guiding principle of Maglev is to have only a single IR (i.e. minimise
reductions from high-level IR nodes to low-level IR nodes, preferring direct
lowering to the latter) and minimal passes. That said, we do have a
second pass that tries to eliminate conversions around phis, and we _may_
want to look into non-greedy inlining at some point, which would require
lowering calls into an inlined graph and propagating inlined information
through the remaining graph (likely folding away constants, known branches,
known checks, etc.).

Does this answer your questions? Happy to expand on these.

Leszek

On Mon, Sep 4, 2023 at 11:21 AM Iurii Zamiatin <[email protected]>
wrote:

> Hi,
>
> We want to clarify something w.r.t. future plans for the Maglev code
> generator.
>
> Maglev right now is a very "coarse-grained" compiler - most bytecode
> operations are lowered to a single Maglev node selected based on feedback
> (the design doc for Maglev also refers to these as macro-ops). This is in
> contrast with TF (and TS), which are far more "fine-grained" - a single
> node can be lowered to a CFG fragment with many low-level machine
> operations.
>
> Would it be fair to assume that Maglev will stay "coarse-grained" for the
> foreseeable future? If that is the case, should we assume that the best way
> to port machine rewriter's optimizations/instruction selection patterns
> from TF to Maglev is to add these optimizations to GenerateCode methods?
>
> Finally, what will Maglev pipeline ultimately look like? Is the design
> document for Maglev
> <https://docs.google.com/document/d/13CwgSL4yawxuYg3iNlM-4ZPCB8RgJya6b8H_E2F-Aek>
> up to date with respect to future plans? Will there be any infrastructure
> for folding nodes outside of the graph builder?
>
> Thanks,
> Iurii
>
> --
> --
> v8-dev mailing list
> [email protected]
> http://groups.google.com/group/v8-dev
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/v8-dev/86ab2a6b-c211-4435-b015-6b44bece452cn%40googlegroups.com
> <https://groups.google.com/d/msgid/v8-dev/86ab2a6b-c211-4435-b015-6b44bece452cn%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
