Hi,
I have a small suggestion w.r.t. the language specs of Neko:
one of the alternatives for "expr" is:
{ ident1 => expr1 , ident2 => expr2 ... }
If I'm correct, the comma is optional. Readers may also think the "..." is a token
in its own right (some languages, such as Lua, do have such a token).
This would be a more formal and precise way of writing it:
"{"( ident => expr [","])* "}"
(This says: zero or more identifier/arrow/expression tuples, each followed by an
optional comma, all within braces.)
(Furthermore, it would be better to quote tokens that occur literally, such as the
"{", to distinguish them from the BNF notation itself, which also uses "(" for
grouping, as above.)
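To illustrate what that grammar accepts, here are two object literals that should
both parse under it (a sketch only — this assumes the comma really is optional, as
I believe; I haven't checked every corner case of the parser):

```neko
o1 = { x => 1, y => 2 };  // with the optional separating commas
o2 = { x => 1 y => 2 };   // commas omitted, if they are indeed optional
```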
regards,
klaas-jan
Nicolas Cannasse wrote:
Hi list,
I'm pleased to announce the release of Neko 1.3.
You can download it from here : http://nekovm.org/download
Changes are available here : http://nekovm.org/doc/changes/v1.3
Most of the changes are bug fixes and some API fixes or improvements.
One nice feature is the serialization of object prototypes, which can be
customized depending on your language needs :
see http://nekovm.org/doc/view/serialize
This is great news. Thanks for your continued efforts.
Can you give an example situation to explain this:
"Serialization of bytecode function is possible, but will result in a
runtime exception when deserializing if the function offset in the
bytecode has changed."
ie what can make a function offset in the bytecode change?
Recompilation with extensive changes to the file (ie new functions
added)? Or is any change+recompilation likely to cause an offset
change?
Neko Bytecode is generated "on-the-fly", so any change before the
function code in the neko source might trigger an offset change.
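A sketch of the failure (hypothetical file contents; I'm writing the loadprim
call from memory — check the serialize docs linked above for the real primitive
names and arities):

```neko
// Assumed API: the std library exposes a serialize primitive.
serialize = $loader.loadprim("std@serialize", 1);

// mymod.neko, version 1:
f = function() { return 42; };
data = serialize(f);  // records f's offset in the module's bytecode

// mymod.neko, version 2 -- code added *before* f, then recompiled:
//   g = function() { return 0; };   // shifts all following code
//   f = function() { return 42; };
//
// Unserializing `data` against the version-2 bytecode raises a runtime
// exception, because the stored offset no longer points at f's code.
```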
Since I'm quite busy with haXe at the moment - which is relying on
Neko,
I made a bugfix release and then the two promised features ( JIT and
Continuations ) will wait for 1.4 - probably around this summer.
Hmm, what kind of performance improvements do you expect with the JIT?
Can you hint at your design; how do you plan to implement the JITting?
I did some experiments with the JIT already (there is an experimental x86 JIT
engine, written in NekoML, in neko/src/jit). It works well.
Once JIT'ed, there is a good speedup, since there is no more opcode fetching and
each opcode is compiled together with its parameters. The VM support for JIT is
nice since you can call back and forth between a JIT-module and a
Bytecode-module. The speedup depends of course on the application: whether it's
VM-, GC-, or IO-bound.
The issue with the NekoML JIT is that generating the x86 code for a big Neko
module such as the Neko compiler takes a lot of time. So I will rewrite the whole
code emitter in optimized C.
Nicolas
--
Neko : One VM to run them all
(http://nekovm.org)