On Sep 3, 2008, at 2:23 PM, Attila Szegedi wrote:

> Well, we certainly live in interesting times, at least as far as
> JavaScript runtimes go...

To paraphrase a loan commercial:  "When VMs compete, language  
implementors win."

> TraceMonkey's type specialization seems like something that'd make
> quite a lot of sense. Well, it's trading off memory (multiple versions
> of code), for speed. Basically, if you'd have a simplistic
>
> function add(x,y) { return x + y; }
>
> that's invoked as add(1, 2) then as add(1.1, 3.14), then as add("foo",
> "bar"), you'd end up with three methods on the Java level:
>
> add(int,int)
> add(double,double)
> add(String, String)

That's something you can build in JVM bytecodes using invokedynamic.   
Rather than a generic signature, the call site should use a signature  
that reflects exactly the types known statically to the caller at the  
time it was bytecode-compiled.

So "x+1" would issue a call to add(Object,int), but "x+y" might be  
the generic add(Object,Object).
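In plain Java, those specializations plus a generic fallback might look like the following sketch. (The names are illustrative; in a real runtime these overloads would be generated, not hand-written.)

```java
// Hypothetical type-specialized overloads of a dynamic-language add().
public class Specialized {
    static int add(int x, int y) { return x + y; }
    static double add(double x, double y) { return x + y; }
    static String add(String x, String y) { return x + y; }

    // Generic fallback for call sites where nothing is known statically.
    static Object add(Object x, Object y) {
        if (x instanceof Integer && y instanceof Integer)
            return (Integer) x + (Integer) y;
        if (x instanceof Number && y instanceof Number)
            return ((Number) x).doubleValue() + ((Number) y).doubleValue();
        return String.valueOf(x) + y;  // string concatenation as last resort
    }

    public static void main(String[] args) {
        System.out.println(add(1, 2));                      // int version
        System.out.println(add(1.1, 3.14));                 // double version
        System.out.println(add("foo", "bar"));              // String version
        System.out.println(add((Object) 1, (Object) 2));    // generic fallback
    }
}
```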

When the call site is linked (in the invokedynamic "bootstrap  
method"), a customized method can be found or created, perhaps by  
adapting a more general method.
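A minimal sketch of such a bootstrap method, using the java.lang.invoke API as it eventually shipped in JDK 7 (class and method names here are illustrative, and the invokedynamic instruction itself is simulated from main):

```java
import java.lang.invoke.*;

public class Bootstrap {
    static int addInts(int x, int y) { return x + y; }
    static Object addGeneric(Object x, Object y) { return "" + x + y; }

    // Called by the JVM the first time an invokedynamic instruction runs.
    // It picks a target matching the site's statically known signature,
    // adapting the generic method when no exact specialization exists.
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name,
                                     MethodType type) throws Exception {
        MethodHandle target;
        if (type.equals(MethodType.methodType(int.class, int.class, int.class))) {
            target = lookup.findStatic(Bootstrap.class, "addInts", type);
        } else {
            target = lookup.findStatic(Bootstrap.class, "addGeneric",
                    MethodType.methodType(Object.class, Object.class, Object.class))
                    .asType(type);  // adapt the general method to the site
        }
        return new ConstantCallSite(target);
    }

    public static void main(String[] args) throws Throwable {
        // Simulate a call site whose caller statically knew (int,int).
        CallSite intSite = bootstrap(MethodHandles.lookup(), "add",
                MethodType.methodType(int.class, int.class, int.class));
        System.out.println((int) intSite.dynamicInvoker().invokeExact(2, 3));
    }
}
```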

The language runtime can also delay customization, choosing to  
collect a runtime type profile, and then later relink the call site  
after a warmup period, to a method (or decision tree of methods)  
which reflects the actual profile.
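Delayed customization can be sketched with a MutableCallSite whose initial target profiles its arguments and, after a warmup threshold, relinks the site to a specialized path (the names and the threshold are illustrative):

```java
import java.lang.invoke.*;

public class Profiling {
    static final MutableCallSite SITE;
    static int intHits = 0;  // crude runtime type profile

    // Initial, generic target: does the work and records what it sees.
    static Object addProfiling(Object x, Object y) throws Throwable {
        if (x instanceof Integer && y instanceof Integer && ++intHits >= 3) {
            // Warmed up on ints: relink the site to the specialized path.
            MethodHandle fast = MethodHandles.lookup().findStatic(
                    Profiling.class, "addInts",
                    MethodType.methodType(Object.class, Object.class, Object.class));
            SITE.setTarget(fast);
        }
        return genericAdd(x, y);
    }

    // Specialized target installed after warmup: no profiling overhead.
    static Object addInts(Object x, Object y) {
        return (Integer) x + (Integer) y;
    }

    static Object genericAdd(Object x, Object y) {
        if (x instanceof Integer && y instanceof Integer)
            return (Integer) x + (Integer) y;
        return String.valueOf(x) + y;
    }

    static {
        try {
            SITE = new MutableCallSite(MethodHandles.lookup().findStatic(
                    Profiling.class, "addProfiling",
                    MethodType.methodType(Object.class, Object.class, Object.class)));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) throws Throwable {
        MethodHandle add = SITE.dynamicInvoker();
        for (int i = 0; i < 5; i++)           // first calls profile,
            System.out.println(add.invoke((Object) i, (Object) 1));  // later calls hit the fast path
    }
}
```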

> Combined with HotSpot's ability to inline through
> invokedynamic, we could probably get the same optimal type narrowed,
> inlined code that TraceMonkey can.

Yes, that's the easier way to get customization, via inlining.  We  
probably need an @Inline annotation (use this Power only for Good).

> It seems to me that type specialization is a more broadly
> applicable, more generic, and thus more powerful concept that allows
> for finer-grained (method level) specializations/optimizations than
> doing it on a level of whole classes.

The V8 technique sounds like a successor to Self's internal classing  
mechanism; it sounds more retroactive.  A key advantage of such  
schemes is the removal of indirections and searching.  If you want  
the "foo" slot of an object in a prototype-based language, it's  
better if the actual data structures have fewer degrees of freedom  
and fewer indirections; ideally you use some sort of method caching  
to link quickly to a "foo" method which performs a single indirection  
to a fixed offset.  If the data structure has many degrees of freedom  
(because there is no normalization of representations), then you have  
to treat the object as a dictionary and search for "foo" more often.  
You might be able to look up and cache a getter method for obj.foo,  
but it would be even better to have a fixed class for obj, which you  
test once, and then use optimized getters and setters (of one or two  
instructions) for all known slots in the fixed class.
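The contrast between the dictionary representation and a fixed-class representation with a cached getter might be sketched in plain Java like this (the Shape/CachedGetter names are illustrative, not any particular VM's API):

```java
import java.util.HashMap;
import java.util.Map;

public class Shapes {
    // Fixed-class representation: a shape maps slot names to array
    // offsets once; objects sharing a shape store values in a flat array.
    static class Shape {
        final Map<String, Integer> offsets = new HashMap<>();
        int addSlot(String name) {
            int off = offsets.size();
            offsets.put(name, off);
            return off;
        }
    }

    static class ShapedObject {
        final Shape shape;
        final Object[] slots;
        ShapedObject(Shape shape) {
            this.shape = shape;
            this.slots = new Object[shape.offsets.size()];
        }
    }

    // An inline-cache-style accessor: it remembers the shape and offset
    // it resolved last time, and only searches again on a shape change.
    // After the one shape test, a read is a single indexed load.
    static class CachedGetter {
        final String name;
        Shape cachedShape;
        int cachedOffset;
        CachedGetter(String name) { this.name = name; }
        Object get(ShapedObject obj) {
            if (obj.shape != cachedShape) {          // cache miss: search once
                cachedShape = obj.shape;
                cachedOffset = obj.shape.offsets.get(name);
            }
            return obj.slots[cachedOffset];          // cache hit: fixed offset
        }
    }

    public static void main(String[] args) {
        Shape point = new Shape();
        int fooOff = point.addSlot("foo");
        ShapedObject obj = new ShapedObject(point);
        obj.slots[fooOff] = 42;
        CachedGetter getFoo = new CachedGetter("foo");
        System.out.println(getFoo.get(obj));
    }
}
```

The dictionary alternative would store every object's properties in its own HashMap, paying a hash lookup on every read; the shape test above replaces that with one pointer comparison in the common case.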

-- John

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "JVM 
Languages" group.
To post to this group, send email to jvm-languages@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/jvm-languages?hl=en
-~----------~----~----~----~------~----~------~--~---