I’ve been thinking for a while about reworking the way our Java adapters work. 

Currently, Java adapters have a private final MethodHandle field for every 
function they might invoke from the object containing the implementation, and 
they populate those fields in their constructors by creating method handles for 
the script functions, bound to the implementation object. For smaller 
interfaces (e.g. SAMs) this is not a big deal, but for subclasses with many 
overridable methods it means creating and binding up to “n” method handles 
whenever an instance is created, making construction O(n).
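
To illustrate the cost (a sketch only; JComponent is just a stand-in for any 
method-heavy class, and the loop count is arbitrary):

var ExtendedComponent = Java.extend(Java.type("javax.swing.JComponent"));
var impl = { paint: function(g) { /* override just one method */ } };
for (var i = 0; i < 1000; i++) {
    // With the current scheme, each construction creates and binds a
    // MethodHandle for every overridable JComponent method, not just
    // for paint().
    var c = new ExtendedComponent(impl);
}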

So, I was thinking of replacing this with a different scheme, where we only 
store the implementation object in a field. Initialization is O(1). We then 
emit the bodies of the adapter functions the same way we do a call from JS, 
e.g. for run():

ALOAD <object>
DUP
INVOKEDYNAMIC dyn:getMethod|getProp|getElem:run
INVOKEDYNAMIC dyn:call

(with the proviso of also handling null/undefined from getMethod to either call 
super or throw an UnsupportedOperationException)
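
In JS terms, each emitted adapter method body would roughly behave like the 
snippet below (a sketch only; the real thing is the two invokedynamic 
instructions above, and the exact shape of the fallback is the part still to 
be decided):

var fn = impl.run;                  // dyn:getMethod|getProp|getElem:run
if (fn === undefined || fn === null) {
    // invoke super.run() if there is one, otherwise throw
    // java.lang.UnsupportedOperationException
} else {
    fn.call(impl);                  // dyn:call
}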

Some of the benefits would include:

- Smaller adapter classes (always just a single field for the object). FWIW, 
this'd also avoid absurdly extreme situations like this one: 
<http://stackoverflow.com/questions/27542742/nashorn-bug-jdk8u40-method-code-too-large/27545699>

- The ability to invoke any callable that is a member of the object, not just 
a ScriptFunction (see the first example after this list).

- The ability to use type specialization. This is quite a big deal, actually. 
A lesser-known fact about Nashorn is that the way we're binding a function 
often involves another level of indirection, as we need to use a “generic 
invoker” with its own call site that handles deoptimizing recompilation etc. 
So we actually box everything going into a ScriptFunction invoked through a 
Java adapter. With the new scheme, we could ensure we emit dyn:call with the 
relevant types (int/long/double/Object), and the linkage would also handle 
deoptimizing recompilation as expected (see the second example after this 
list).
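
First, a sketch of the “any callable” point (hypothetical under the new 
scheme; today this would not work, as the adapter insists on a 
ScriptFunction):

// impl.run is not a ScriptFunction but a dynamically obtained Java
// method; under the new scheme, dyn:call would link it directly.
var impl = { run: java.lang.System.out.println };
var r = new java.lang.Runnable(impl);
r.run(); // prints an empty line via PrintStream.println()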
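
Second, a sketch of the type specialization point 
(java.util.function.IntUnaryOperator is just a convenient primitive-typed SAM 
here; whether a given function actually stays unboxed would of course depend 
on the usual deoptimizing recompilation):

var op = new java.util.function.IntUnaryOperator(function(x) {
    return x * 2; // with a typed dyn:call, x could stay an int end-to-end
});
print(op.applyAsInt(21)); // 42, without boxing the argument to Integer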

However, there are two possible drawbacks:

- We'll be doing two dynamic operations on every invocation instead of one. 
This should not be a big deal, but we do move the lookup from the constructor 
(a one-time cost per instance) to every invocation. Of course, it isn't any 
worse than doing the same thing from JS code, as JS code always uses the 
lookup-then-call idiom anyway.

- We subtly change the behavior with regard to backwards compatibility. 
Previously, the functions were bound to the instance at construction time; 
now we'd look them up on every invocation. So while changing the functions 
within the implementation object previously had no effect, reassigning one 
would now change the adapter's behavior. Consider the small example program 
below:

var impl = { run: function() { print("1") } };
var r = new java.lang.Runnable(impl);
r.run();
impl.run = function() { print("2") };
r.run();

Currently, this prints 1 twice, as the functions are bound into the adapter 
instance at its construction. If we implemented the change, it would print 1 
and then 2. That backwards-incompatible change in behavior is really the only 
reason I feel we need to discuss this. I expect the vast majority of folks 
pass object or function literals anyway, so the implementation object is 
effectively immutable and they wouldn't be affected; e.g.
    new java.lang.Runnable(function() { print("1") })
or
    new java.lang.Runnable({ run: function() { print("1") } })
which are by far the most typical idioms, would behave exactly as before.

What do people think?

Attila.
