On Apr 30, 2008, at 6:37 AM, John Wilson wrote:

> On 4/30/08, Attila Szegedi <[EMAIL PROTECTED]> wrote:
>>
>>  On 2008.04.30., at 11:49, Jochen Theodorou wrote:
>>
>>> I don't see how this will help me in Groovy. We use the Java  
>>> types, so
>>> there is no need to represent a 20 bit integer.
>>
>>
>> It doesn't help you now. It'd help you in a new VM that has this
>>  trick :-)

Right, thanks Attila.  Fixnums are under the covers.  The best  
optimizations (usually) are.

In general, new bytecodes are not necessary for performance, since  
compilers are good at treating well-known static methods as macro-instructions.  When bytecode changes are justified, it's because
existing workarounds have high simulation overheads in time and  
bytecode space.
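To make the macro-instruction point concrete (my example, not from the thread): JITs recognize certain well-known static methods as intrinsics and compile them to single machine instructions instead of real calls.

```java
public class IntrinsicDemo {
    public static void main(String[] args) {
        // In the class file this is an ordinary invokestatic, but a JIT
        // that knows Integer.bitCount emits one instruction (POPCNT on x86)
        // rather than a method call -- no new bytecode required.
        int pc = Integer.bitCount(0b1011_0110);
        System.out.println(pc);  // 5
    }
}
```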

For a small but representative example, consider ldc of a class  
constant.  The old JDK 1.1 code uses a static semi-anonymous variable  
and a fast-slow CFG diamond in the bytecodes.  This is verbose, and the  
verbosity makes it harder for the JIT to see what is going on.  The  
standard code today is an ldc CONSTANT_Class, which every JIT can  
robustly optimize.
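A sketch of that old idiom, written out in source form (the synthetic names follow javac's classic pattern; details varied by compiler):

```java
public class ClassLiteralDemo {
    // the static semi-anonymous cache variable (synthetic in real javac output)
    static Class class$java$lang$String;

    // helper emitted once per class that used class literals
    static Class class$(String name) {
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            throw new NoClassDefFoundError(e.getMessage());
        }
    }

    public static void main(String[] args) {
        // each use of String.class expanded to this fast-slow CFG diamond:
        Class c = (class$java$lang$String != null)
                ? class$java$lang$String                                   // fast path: cached
                : (class$java$lang$String = class$("java.lang.String"));   // slow path: lookup
        System.out.println(c.getName());
    }
}
```

All of that collapses to a single `ldc` of a CONSTANT_Class entry in the modern scheme.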

> I'm rather unsure about the value of making changes like this to the
> JVM. The timescale from now to when they become useable is rather long
> (2-3 years to get into a released JVM then another 2-3 years before I
> can rely on most of my target audience having the JVM in production).

That's how the JVM game has been played for 10 years now:  Major  
optimizations like loop transformation or compressed oops or fixnums  
or escape analysis take years to work through the pipeline.  Over  
time, JVM performance increases as new features deploy, each one  
after its own gestation period.  Depending on the time scales your  
project contemplates, it may or may not be useful to know what JVM  
optimizations are in the pipeline.  It is useful for language  
implementors to know the directions JVM implementors are taking on  
problems they care about, and useful for JVM implementors to talk  
with their users about what optimizations are on the table.  Today  
we're talking about fixnums.  A year or two ago we were talking about  
other optimizations now delivered.

> been solved another way. (I'm also slightly terrified of building
> something based on address alignment and assuming that the behaviour
> of the hardware will be the same in 5-10 years time).

There have almost always been slack bits in machine addresses, and  
will certainly be slack bits in the future 64-bit world.  It's a 40-year-old tactic with plenty of life left in it.  Check my blog entry;  
it assumes only the presence of slack bits, not any particular  
position of them in the address word.  In a pinch, you could  
repurpose any unmapped portion of the address space as code points  
for fixnums.  Messy, but these optimizations are decades mature, and  
today a dozen CPU cycles can be faster than waiting for your memory  
hierarchy to cough up the data.
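For illustration only, here is a one-bit tagging scheme modeled in plain Java longs (the blog entry assumes only that slack bits exist somewhere, not this particular position):

```java
public class TaggedFixnum {
    // Heap addresses are word-aligned, so their low bit is 0; setting the
    // low bit to 1 marks the word as an immediate fixnum instead of a pointer.
    static long tag(long n)         { return (n << 1) | 1; }
    static boolean isFixnum(long w) { return (w & 1) != 0; }
    static long untag(long w)       { return w >> 1; }  // arithmetic shift keeps the sign

    public static void main(String[] args) {
        long w = tag(-42);
        System.out.println(isFixnum(w) + " " + untag(w));  // true -42
        System.out.println(isFixnum(0x7f3a_b000L));        // false: looks like an aligned address
    }
}
```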

> Whilst Method Handles are quite interesting I'd rather see effort being
> expended in making java.lang.reflect.Method faster and more useful (we
> discussed allowing downcasting to types which avoided the need to box
> the parameters, and unbox the result at one point on this list). If
> only on the basis that this could be in the next version of the JVM as
> an incremental improvement to reflection rather than as a change to
> support "dynamic languages".

Those discussions contributed integrally to the method handle design;  
thank you.  MethodHandle is the downcast type that supports the  
direct, unboxed call.
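In the java.lang.invoke API as it eventually shipped (this thread predates the final design), the direct, unboxed call looks like this:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MHDemo {
    public static void main(String[] args) throws Throwable {
        // A direct handle to Math.max(int,int): invokeExact passes and
        // returns primitive ints -- no boxing, no Object[] argument packing.
        MethodHandle max = MethodHandles.lookup().findStatic(
                Math.class, "max",
                MethodType.methodType(int.class, int.class, int.class));
        int r = (int) max.invokeExact(3, 4);
        System.out.println(r);  // 4
    }
}
```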

The present unboxed alternative (many interfaces + many classes)  
scales poorly; see Gilad's slide #20 in:
   http://blogs.sun.com/roller/resources/gbracha/JAOO2005.pdf

Even if such an interface/class pair were optimized down to 100 words,  
it would still be 10x more expensive in space than a method handle,  
and no faster in its call sequence.  (Data point:  
Pack200 shrinks a one-method anonymous adapter class to about 90  
bytes.  That's a robust lower bound; the JVM has to expand it before  
it's usable.)  Space still matters these days, because a computer  
runs in its cache.
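For readers who haven't seen it, the workaround in question looks roughly like this (names are mine, for illustration):

```java
// One hand-written interface per signature...
interface IntIntToInt { int invoke(int a, int b); }

public class AdapterDemo {
    public static void main(String[] args) {
        // ...and one adapter class per target method: an entire class file
        // (about 90 bytes even after Pack200) just to get one unboxed call.
        IntIntToInt max = new IntIntToInt() {
            public int invoke(int a, int b) { return Math.max(a, b); }
        };
        System.out.println(max.invoke(3, 4));  // 4
    }
}
```

A single method handle replaces both the interface and the adapter class.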

Method handles will make reflection faster, and also less necessary.   
Implementors will take their pick between compatibility and bleeding  
edge optimization, or take both with a switch setting at startup.

Best wishes,
-- John

You received this message because you are subscribed to the Google Groups "JVM Languages" group.