On 5/18/05, Geir Magnusson Jr. <[EMAIL PROTECTED]> wrote:
> 
> On May 18, 2005, at 9:36 AM, Steve Blackburn wrote:
> 
> > This subject has been covered in detail at least twice already.
> >
> > There is no need for any function call on the fast path of the
> > allocation sequence.  In a Java in Java VM the allocation sequence
> > is inlined into the user code.  This has considerable advantages
> > over "a few lines of assembler".  Aside from the obvious advantage
> > of not having to express your allocator in assembler, using Java
> > also compiles to better code since the code can be optimized in
> > context (register allocation, constant folding etc), producing much
> > better code than hand crafted assembler.
> >
> > However this is small fry compared to the importance of compiling
> > write barriers correctly (barriers are used by most high
> > performance GCs and are executed far more frequently than
> > allocations).  The same argument applies though.  The barrier
> > expressed in Java is inlined in situ, and the optimizing compiler is
> > able to optimize in context.
> >
> > Modularization does not imply any of this.
> 
> I assume you mean that Modularization is orthogonal to this - that
> they are independent aspects?

Modularization always interacts with performance, but I believe the
empirical evidence suggests that the impact of splitting out the JIT
and the GC as separate modules is in the noise.  I suspect the
internal JVM support for java.lang.Thread and java.lang.Object can
also be split off as a separate module with very little performance
impact.  It might even be possible to include java.util.concurrent
support in the same threads module.
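
To make "separate module" concrete, here is a rough sketch of the
sort of allocator-facing boundary I have in mind.  The names
(Allocator, alloc, allocSlow) are mine, purely for illustration, not
anything anyone has proposed:

// Hypothetical module boundary between the JIT/runtime and the GC.
// The point is only that the boundary can be a couple of calls whose
// cost disappears once the JIT inlines the fast path.
public interface Allocator {
    // Fast path: return the address of 'bytes' bytes of storage,
    // aligned to 'align'.
    long alloc(int bytes, int align);

    // Slow path: taken when the thread-local buffer is exhausted;
    // may refill the buffer or trigger a collection.
    long allocSlow(int bytes, int align);
}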

To elaborate on a previous comment about inlining allocation and
write-barrier sequences: as a first design step, the GC team
hand-optimizes an assembly sequence while making sure the
functionality is absolutely correct.  As a second step, the JIT team
blindly inlines the GC team's assembly code and starts doing
performance analysis.  As a third step, the JIT team integrates the
inlined sequence(s) into the IR so that all the optimizations can be
performed.  Perhaps these steps are the same for a JVM written in
Java as for one written in C/C++.
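
To make that concrete, here is a rough sketch of the kind of fast
paths being discussed, written as the plain Java a Java-in-Java VM
would want the JIT to inline.  The names (cursor, limit, allocSlow,
cardTable, LOG_CARD_SIZE, DIRTY) are mine for illustration, and a
real VM would use an unsafe address type rather than bare longs:

// Sketch of a thread-local bump-pointer allocator and a card-marking
// write barrier, the two fast paths under discussion.
final class BumpPointerSketch {
    private long cursor;                 // current free position
    private long limit;                  // end of the thread-local buffer
    private final byte[] cardTable = new byte[1 << 20];
    private static final int LOG_CARD_SIZE = 9;   // 512-byte cards
    private static final byte DIRTY = 1;

    // The sequence a JIT would inline at every 'new'.
    long alloc(int bytes) {
        long result = cursor;
        long newCursor = result + bytes;
        if (newCursor > limit) {
            return allocSlow(bytes);     // rare case: refill buffer or collect
        }
        cursor = newCursor;              // bump the pointer
        return result;                   // raw address of the new object's storage
    }

    long allocSlow(int bytes) {
        // Out of line: acquire a fresh buffer from the GC, or collect.
        throw new UnsupportedOperationException("slow path not sketched");
    }

    // Card marking: the common case is one shift and one byte store;
    // the collector later rescans dirty cards for old-to-young
    // pointers.  (Index math is simplified for the sketch.)
    void writeBarrier(long objAddr, Object newValue) {
        cardTable[(int) (objAddr >>> LOG_CARD_SIZE)] = DIRTY;
        // ...the actual reference store of 'newValue' follows...
    }
}

Once sequences like these are expressed in the IR, the compiler can
keep cursor and limit in registers, fold constants, and move the
rare-path branches out of line, which is the "optimize in context"
point made above.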

I am curious whether a JVM written in Java must break type safety.
Does anyone know?  For example, the "new" bytecode will need to
manipulate (gasp!) raw "C" pointers.  Specifically, Java code will
need to scribble on free memory to slice off "X" untyped bytes and
return a raw pointer to the base of that chunk of memory.  Then the
Java code will need to use the raw pointer to install things like a
vtable pointer.  Once the object is set up, the Java code can revert
to running code that can actually be verified.  Also, does anyone
know the current state of research on formally proving that a GC
written in Java is type-safe?
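
For what it's worth, my understanding is that existing Java-in-Java
VMs confine the unsafe part to a small "magic" layer: an
Address-style type whose methods the optimizing compiler recognizes
and lowers to raw loads and stores, so only that layer escapes the
verifier (JikesRVM's vmmagic is the example I have in mind, though
the method names below are illustrative rather than its actual API).
A rough sketch of what object creation might look like on top of such
a layer:

// Illustrative only: 'Address' stands in for an unsafe,
// compiler-recognized type; in a real Java-in-Java VM the compiler
// lowers these calls to raw loads and stores.  This stub exists just
// so the sketch reads as Java.
interface Address {
    void store(Object value, int offsetInBytes);  // raw reference store
    void store(int value, int offsetInBytes);     // raw int store
    Object toObject();                            // reinterpret as an object reference
}

final class ObjectCreationSketch {
    static final int TIB_OFFSET = 0;      // made-up header layout
    static final int STATUS_OFFSET = 8;

    static Object createObject(int size, Address vtable) {
        Address raw = alloc(size);        // slice off 'size' untyped bytes
        raw.store(vtable, TIB_OFFSET);    // install the vtable/TIB pointer
        raw.store(0, STATUS_OFFSET);      // initialize the header word
        return raw.toObject();            // from here on, an ordinary, typed object
    }

    static native Address alloc(int size);  // the bump-pointer fast path sketched earlier
}

Whether that small magic layer can itself be proven type-safe is, I
take it, exactly the open question above.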
 
> 
> geir
> 
> >
> > --Steve
> >
> >
> > Weldon Washburn wrote:
> >
> >
> >> On 5/18/05, David Griffiths <[EMAIL PROTECTED]> wrote:
> >>
> >>
> >>> I think it's too slow to have the overhead of a function call for
> >>> every object allocation. This is the cost of modularization. I doubt
> >>> any of the mainstream JVMs you are competing with do this.
> >>>
> >>>
> >> Yes, I agree.  A clean interface would have a function call for
> >> every object allocation.  However, if the allocation code itself
> >> is only a few lines of assembly, it can be inlined by the JIT.
> >> Using a moving GC, it is possible to get the allocation sequence
> >> down to a pointer bump and a compare to see if you have run off
> >> the end of the allocation area.
> >>
> >>
> >
> >
> 
> --
> Geir Magnusson Jr                                  +1-203-665-6437
> [EMAIL PROTECTED]
> 
> 
>
