Ah,
as always with emails ...
some misunderstandings ...
please see below!
Jochen Hoenicke wrote:
>
> Angelo Schneider writes:
> > Jochen Hoenicke wrote:
> > > http://www.informatik.uni-oldenburg.de/~delwi/japhar/ideas
> > >
> >
> > Hi all,
> >
> > Jochen seems to focus on method calls in JITing bytecode.
> >
> > Why do you focus on that and not on overall byte code ->
> > native code jit compilation?
>
> Because only that part needs to know the internals of the data
> structure. For the remaining instructions like iadd or aload I
> don't need to know anything about how an object is laid out.
That's clear.
>
> I have nice ideas of generating a JIT that will cache most values in
> registers, so that load instructions aren't translated at all. This
> should give almost optimal code. I want to code something that
Probably we should exchange ideas. I have many, too. I wanted to
wait until some VMs are out to pick one up, because I don't have the
time to (design, well, I did ... and) implement one on my own.
> executes Java at almost native speed. Some of that may be just a
> dream, maybe it will make the JIT so slow that it would be better not
> to compile the methods, but I think it could be worthwhile.
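Just to make that concrete, here is a minimal sketch of what such
register caching in the translator could look like (all names and
helpers are invented for illustration, this is not japhar code): the
translator keeps a model of which locals and operand-stack slots
currently live in registers, so an iload whose value is already cached
emits no machine code at all.

    #include <stdio.h>

    /* Hypothetical register-caching JIT translator (invented names). */
    enum reg { R_NONE = -1, R_EAX, R_EBX, R_ECX, R_EDX };
    static const char *rname[] = { "eax", "ebx", "ecx", "edx" };

    struct jit_state {
        enum reg local_in_reg[256]; /* register holding local #i, or R_NONE */
        enum reg stack_reg[16];     /* registers modelling the operand stack */
        int sp, next;
    };

    static enum reg alloc_reg(struct jit_state *s)
    {
        return (enum reg)(s->next++ % 4);  /* naive round robin, no spilling */
    }

    /* iload n: if the local is already cached, only the model stack is
     * updated -- no machine code is emitted. */
    static void translate_iload(struct jit_state *s, int n)
    {
        enum reg r = s->local_in_reg[n];
        if (r == R_NONE) {
            r = alloc_reg(s);
            printf("  mov %s, [frame+%d]\n", rname[r], 4 * n);
            s->local_in_reg[n] = r;
        }
        s->stack_reg[s->sp++] = r;
    }

    /* iadd: pop two model-stack registers, emit a single add. */
    static void translate_iadd(struct jit_state *s)
    {
        int i;
        enum reg b = s->stack_reg[--s->sp];
        enum reg a = s->stack_reg[--s->sp];
        printf("  add %s, %s\n", rname[a], rname[b]);
        for (i = 0; i < 256; i++)          /* a no longer holds a pure local */
            if (s->local_in_reg[i] == a)
                s->local_in_reg[i] = R_NONE;
        s->stack_reg[s->sp++] = a;
    }

    int main(void)
    {
        struct jit_state s;
        int i;
        s.sp = 0; s.next = 0;
        for (i = 0; i < 256; i++)
            s.local_in_reg[i] = R_NONE;
        /* iload_1 iload_2 iadd iload_2 iadd: five bytecodes, but only
         * four instructions -- the second iload_2 emits nothing. */
        translate_iload(&s, 1); translate_iload(&s, 2); translate_iadd(&s);
        translate_iload(&s, 2); translate_iadd(&s);
        return 0;
    }

How far one gets with this in practice of course depends on how many
registers the target CPU has.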
>
> A JIT clearly needs to interface with the JVM in some way, and this
> interface consists of method calls, field access, instanceof, checkcast
> and array operations. This is why I focus on this part.
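A minimal sketch of why for example instanceof falls into this
JVM-dependent part (invented names, not the actual japhar structures;
interfaces and arrays are left out): it has to walk the internal class
structure, so the compiler has to know its layout.

    /* Simplified instanceof: walk the superclass chain (sketch only). */
    struct class_t { struct class_t *super; };
    struct object  { struct class_t *clazz; };

    int is_instance_of(struct object *obj, struct class_t *target)
    {
        struct class_t *c;
        if (obj == NULL)
            return 0;                /* instanceof on null is false */
        for (c = obj->clazz; c != NULL; c = c->super)
            if (c == target)
                return 1;
        return 0;                    /* interfaces would need an extra lookup */
    }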
>
> The JIT code on that page is more a test of how one can use the data
> structures to efficiently implement the JVM-dependent bytecode ops. I
> didn't want to say that JITed code must look exactly this way.
>
> > Jochen, do you want to treat JIT-compiled methods like
> > native (JNI) methods? Isn't there another way?
>
> My JITed methods will not use the JNI interface, but use the
Yeah, I meant (misunderstanding above) the signature. Your proposal
for method calls looks very similar to JNI calls. Hence the question.
> internals directly. The reason is speed. I want the JIT compiler to be
> as fast as possible, without going through several layers just to
> call a virtual method or access a field. It should be _faster_ than
> well-programmed JNI.
How do you achieve that?
Or better: what are the benefits and what are the drawbacks of JNI?
How do you tackle that in a JIT?
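To illustrate what the speed argument is about, here is a small
comparison sketch. The first half uses the real JNI calls; the second
half assumes an invented internal layout (my_object_layout, X_OFFSET)
purely for illustration, it is not how japhar actually stores objects.

    #include <jni.h>

    /* JNI way: every access goes through the env function table and a
     * field ID looked up by name and signature (in real code the ID
     * would be cached, but it is still an indirect call). */
    jint jni_get_x(JNIEnv *env, jobject obj)
    {
        jclass   cls = (*env)->GetObjectClass(env, obj);
        jfieldID fid = (*env)->GetFieldID(env, cls, "x", "I");
        return (*env)->GetIntField(env, obj, fid);
    }

    /* What JITed code can do instead when it knows the layout: the
     * field offset is a compile-time constant, so a getfield becomes
     * a single load from [object + offset]. */
    struct my_object_layout;          /* whatever the VM uses internally */
    #define X_OFFSET 8                /* made-up offset of field "x" */

    jint jit_get_x(struct my_object_layout *obj)
    {
        return *(jint *)((char *)obj + X_OFFSET);   /* one mov */
    }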
>
> Look at tya, and you see that it uses a lot of undocumented things to
Sorry, no time for that :-( BTW: Once there was an assembler for Acorn
Archimedes machines, called TYA (or TLA?). Is there a relation?
> make virtual method calls fast. It _does not_ create valid JNI code,
> and this is the main reason, why it is so fast.
>
> > I would have expected that JITing is quite easy and
> > gives, in a first approach, sufficient speed improvements if
> > you replace the bytecode by jumps into the JVM code.
> >
> > So the interpreter would have to execute a sequence of
> > JSRs generated by the JIT compiler.
>
> That sounds slow! It means calling a method for every simple bytecode
> instruction; this can't be much faster than interpreting the code. I
It is not.
Assumption: most VMs are about a factor of 25 to 100 slower than
assembler/C code.
I don't know why; I always found the speed of Java incredibly low!
Fact: UCSD Pascal used a bytecode very similar to JBC. On an 8-bit
6502 processor it was about a factor of 4 to 6 slower than assembler.
(About 20 times faster than BASIC!)
The problem in interpreters is the ratio between cycles spent fetching
and decoding bytecode ops versus cycles spent executing them.
If you use a prepared sequence of JSR instructions you are of course
still far slower than real assembler code, but you can get as close as
a 50/50 ratio, which means 50% of real assembler speed.
Why?
One jump including the return is two instructions. If the subroutine
being jumped to has at least two instructions (well, you need to count
cycles, not instructions, to calculate this properly, but I'm not
familiar with Intel CPUs), you have a ratio of 50/50. The larger the
subroutines (the more each one does), the faster you get.
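Here is a rough, portable model of that idea (not real
subroutine-threaded machine code and not japhar code, all names are
invented): the bytecode is translated once into a flat sequence of
calls, so at run time only the call/return overhead per op remains, no
fetch and decode.

    #include <stdio.h>

    struct vm { int stack[32]; int sp; int locals[8]; };
    typedef void (*op_fn)(struct vm *);

    /* One tiny handler per bytecode. */
    static void op_iload_1(struct vm *vm) { vm->stack[vm->sp++] = vm->locals[1]; }
    static void op_iload_2(struct vm *vm) { vm->stack[vm->sp++] = vm->locals[2]; }
    static void op_iadd(struct vm *vm)    { vm->sp--; vm->stack[vm->sp-1] += vm->stack[vm->sp]; }

    /* "Compilation": translate the bytecode once into a call sequence
     * (a real JIT would emit actual JSR/CALL instructions instead). */
    static op_fn code[] = { op_iload_1, op_iload_2, op_iadd };

    /* Execution: no fetch/decode per op, only the calls themselves. */
    static void run(struct vm *vm, op_fn *ops, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            ops[i](vm);
    }

    int main(void)
    {
        struct vm vm = { {0}, 0, {0, 3, 4} };
        run(&vm, code, (int)(sizeof(code) / sizeof(code[0])));
        printf("%d\n", vm.stack[0]);   /* 3 + 4 = 7 */
        return 0;
    }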
> would even say it's slower than a good hand-optimized interpreter
> written in assembler.
>
> BTW: My suggestion should also give a speedup for interpreted code.
That was clear.
> Currently calling a method from JNI or from the interpreter involves
> several checks, just to make sure that the method is loaded, isn't
> native and so on, before the interpreter for that method is started.
You have to pack that into a suitably set-up vtab entry!
At first you set the vtab entry up to perform those checks; after that
you change the vtab entry to skip those checks and branch directly into
the interpreter or into the native routine (or an adapter).
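A minimal sketch of that check-once-then-patch idea (all types and
names are invented, not the actual japhar vtab; a call site is assumed
to do m = obj->vtab->slots[i]; m->entry(obj, m);):

    /* Sketch: a method entry that first routes through the checks and
     * then patches itself to the direct target. */
    struct object;
    struct method;
    typedef void (*method_entry)(struct object *self, struct method *m);

    struct method {
        method_entry entry;        /* what every call site jumps through */
        method_entry real_entry;   /* interpreter, native adapter or JITed code */
        int          resolved;
    };

    struct vtab   { struct method *slots[16]; };
    struct object { struct vtab *vtab; };

    /* Initially m->entry points here. */
    void check_then_patch(struct object *self, struct method *m)
    {
        if (!m->resolved) {
            /* make sure the class is loaded, decide whether the target
             * is bytecode (interpreter), native (adapter) or JITed, and
             * set m->real_entry accordingly ... */
            m->resolved = 1;
        }
        m->entry = m->real_entry;   /* skip the checks from now on */
        m->real_entry(self, m);     /* and dispatch this first call */
    }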
>
> I know that this is another big change in the object layout, that can
> break many things. But it is really needed to get a good performance.
Object layout question: how do you ensure that serialized objects look
the same (from VM to VM) regardless of how they are kept internally by
the VM?
>
> Jochen
>
> PS: I sent two mails from my university account
> ([EMAIL PROTECTED]) the last days. They
> both didn't make it to the list. Anyone knows why?
Well, I got two copies of this email: one with size 3k and one with
size 4k, according to Netscape :-)
Best Regards,
Angelo
---------------------------------------------------------------------
Angelo Schneider OOAD/UML [EMAIL PROTECTED]
Putlitzstr. 24 Patterns/FrameWorks Fon: +49 721 9812465
76137 Karlsruhe C++/JAVA Fax: +49 721 9812467