>> Since I'm quite busy with haXe at the moment - which is relying on Neko,
>> I made a bugfix release and then the two promised features ( JIT and
>> Continuations ) will wait for 1.4 - probably around this summer.
>>
> Hmm, what kind of performance improvements do you expect with the JIT?
> Can you hint at your design; how do you plan to implement the JITting?
I have already done some experiments with the JIT (there is an
experimental x86 JIT engine written in NekoML in neko/src/jit). It works well.
Once JITed, there is a good speedup since there is no more opcode
fetching, and each opcode is specialized for its parameters. The VM
support for JIT is nice since you can call back and forth from a
JIT-module to a Bytecode-module. The speedup depends of course on the
application, that is, on whether it's VM-, GC-, or IO-bound.
The issue with the NekoML JIT is that generating the x86 machine code
for a big Neko module such as the Neko compiler takes a lot of time. So
I will rewrite the whole code emitter in optimized C.
This sounds great, and I looked at the todo where a speedup of 20-30 is
indicated for the jitted code. Very impressive. I will definitely
take up my work on a ruby2neko compiler this summer; neko seems to be
an excellent target platform. And with continuations in place I see no
more obstacles to supporting full Ruby (will need to check neko's
exception handling more though).
Also, will the JIT be x86-only? (With Apple also going for x86 nowadays,
I guess this is not a crucial question anymore, but it's good to know.)
Thanks,
Robert
--
Neko : One VM to run them all
(http://nekovm.org)