Thank you for this, it's still a bit hard to follow but I will give it a try.

I should state that I entirely agree with you regarding the ambiguity of coercion, but I think the problem is minimized when you document the limitations. You can't support everything--by definition--but you can allow for some common shortcuts. And recall Rick's comment: it's behavior that users simply expect. Worse still, the current behavior would not only seem "broken" for some, but it's also very hard to debug: a concatenated string instead of an array of arguments? It took me an hour of poking around to understand what was going wrong.

Furthermore, a more general point: It's crucial for the Nashorn project not to forget the vast amount of Rhino code out there. Even when Rhino might seem quirky, it would still be very good to support these quirks, even with an optional flag. I have tens of thousands of lines of code for Rhino, and it's daunting to have to find and catch all these calls, especially when often these arrays are constructed dynamically.

Nashorn has a huge legacy in Rhino, but you should treat it as a benefit, not a problem!

On 10/09/2013 04:50 PM, Attila Szegedi wrote:
Well, the API of GuardingTypeConverterFactory should be fairly straightforward: you're 
asked for a conversion from a type to a type, you're supposed to return a conditional 
invocation (invocation with a guard). What you'd do is react to any calls where 
to.isArray() == false with null, and to.isArray() == true with a newly created 
GuardedInvocation where the guard is a method handle for "object instance of 
ScriptObject", and invocation is basically a method handle to NativeJava.to() with 
the appropriate type parameter bound.
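For what it's worth, the shape Attila describes can be sketched with plain java.lang.invoke, without touching Nashorn internals. Here a List stands in for ScriptObject, a toArray() call stands in for NativeJava.to() with the type parameter bound, and MethodHandles.guardWithTest plays the role of a GuardedInvocation; all class and method names below are illustrative only:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.Arrays;
import java.util.List;

public class GuardedConversionSketch {

    // Guard: only apply the conversion when the argument is a List
    // (standing in for "object instanceof ScriptObject").
    static boolean isList(Object o) {
        return o instanceof List;
    }

    // Target: the actual conversion (standing in for NativeJava.to()
    // with the target array type bound).
    static Object[] toArray(Object o) {
        return ((List<?>) o).toArray();
    }

    // Fallback: pass the value through unchanged when the guard fails.
    static Object[] identity(Object o) {
        return (Object[]) o;
    }

    public static MethodHandle guardedInvocation() throws Exception {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle guard = lookup.findStatic(GuardedConversionSketch.class,
                "isList", MethodType.methodType(boolean.class, Object.class));
        MethodHandle target = lookup.findStatic(GuardedConversionSketch.class,
                "toArray", MethodType.methodType(Object[].class, Object.class));
        MethodHandle fallback = lookup.findStatic(GuardedConversionSketch.class,
                "identity", MethodType.methodType(Object[].class, Object.class));
        // A GuardedInvocation pairs a guard with a target invocation;
        // plain guardWithTest expresses the same conditional shape.
        return MethodHandles.guardWithTest(guard, target, fallback);
    }

    public static void main(String[] args) throws Throwable {
        Object[] converted = (Object[]) guardedInvocation()
                .invoke(Arrays.asList(1, 2, 3));
        System.out.println(converted.length); // 3
    }
}
```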

Note that this'll also incidentally work for conversion to multidimensional 
array types (e.g. an int[][] parameter target passed [[1,2],[3,4]]), as in the 
Java.to() implementation elements are converted elementwise to the target 
component type using the full type conversion machinery of Dynalink, consulting 
all the discovered type converter factories.
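The elementwise, dimension-by-dimension conversion can be illustrated in plain Java; this is a hand-rolled stand-in for the Dynalink machinery, not the actual Java.to() implementation:

```java
import java.util.Arrays;
import java.util.List;

public class NestedArrayConversion {

    // Convert a list of lists of Numbers to int[][]: each dimension of
    // the target type peels off one level of nesting, and each leaf
    // element is converted to the target component type (int).
    static int[][] toIntMatrix(List<? extends List<? extends Number>> rows) {
        int[][] result = new int[rows.size()][];
        for (int i = 0; i < rows.size(); i++) {
            List<? extends Number> row = rows.get(i);
            int[] converted = new int[row.size()];
            for (int j = 0; j < row.size(); j++) {
                converted[j] = row.get(j).intValue(); // elementwise conversion
            }
            result[i] = converted;
        }
        return result;
    }

    public static void main(String[] args) {
        int[][] m = toIntMatrix(Arrays.asList(
                Arrays.asList(1, 2), Arrays.asList(3, 4)));
        System.out.println(Arrays.deepToString(m)); // [[1, 2], [3, 4]]
    }
}
```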

The reason I was uncomfortable providing this conversion as an implicit 
conversion is that I still think there's no single correct way to do it, as 
there's many strategies with multiple valid choices in the conversion:

1. Do you generally recursively convert elements, or only to the depth 
indicated by the number of dimensions on the target array type? Given a target 
of Object[], and a value of [ [ [1, 2], [3, 4] ],  [5, 6] ], do you also 
convert nested arrays or not? If yes, to what type? Object[]? List? (Proving 
that every element in every nested array can be represented as, say, int[] 
would be an additional O(n)-time operation.) An obvious choice is not 
converting beyond the depth indicated by the target type's dimensions. By 
implication, automatic conversion to java.util.List would never involve nested 
conversions, as lists are one-dimensional.
2. If you do nested conversions, can you guarantee that none of the arrays 
contains itself as an element? If not, you need to maintain a stack of the 
elements currently being converted to detect cycles; otherwise you end up with 
a StackOverflowError. If you can guarantee no array contains itself as an 
element, then maintaining a stack is a waste of time and memory. (Note: 
java.util.Arrays.deepHashCode and deepEquals explicitly have undefined 
behavior for directly or indirectly self-referencing arrays; either design 
choice is valid under different requirements.)
3. When you detect a cycle, do you throw an error (like JSON.stringify() does), 
or do you short-circuit and use the already existing, half-converted object in 
its place?
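The cycle-detection strategy from point 2 can be sketched with an identity-based set of the lists currently being converted; this is plain illustrative Java, not Nashorn code, and it picks the throw-on-cycle choice from point 3:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

public class CycleAwareConversion {

    // Recursively "convert" a nested List to a nested Object[] while
    // tracking the lists currently being converted, so a list that
    // (directly or indirectly) contains itself raises an error instead
    // of recursing until StackOverflowError.
    static Object[] convert(List<?> list, Set<Object> inProgress) {
        if (!inProgress.add(list)) {
            throw new IllegalArgumentException("cyclic structure detected");
        }
        try {
            Object[] out = new Object[list.size()];
            for (int i = 0; i < list.size(); i++) {
                Object e = list.get(i);
                out[i] = (e instanceof List) ? convert((List<?>) e, inProgress) : e;
            }
            return out;
        } finally {
            inProgress.remove(list); // pop the "stack" on the way out
        }
    }

    static Object[] convert(List<?> list) {
        // Identity-based set: we care about the same object, not equals().
        return convert(list, Collections.newSetFromMap(new IdentityHashMap<>()));
    }

    public static void main(String[] args) {
        List<Object> cyclic = new ArrayList<>();
        cyclic.add(1);
        cyclic.add(cyclic); // the list contains itself
        try {
            convert(cyclic);
            System.out.println("no cycle detected");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // cyclic structure detected
        }
    }
}
```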

True, none of these affect conversion of single-dimensional arrays, or 
conversion of arrays to target primitive types. But then we also don't like 
providing half-solutions, and a general complete solution needs to address the 
above questions too.

Then there are other strategy choices too; although we already made design 
decisions on them for Java.to(), I'm including them for the sake of completeness:

1. If you have repeated elements in an Object[] array, do you convert them to 
objects of different identities, or do you canonicalize them? (Java.to() 
doesn't canonicalize, as doing so could require O(n) or more additional memory.)
2. What do you do with undefined elements in a sparse array? (We fill them in 
with the array element type's equivalent of JS Undefined.)
3. Back to automatic conversion to List vs. Object[]: is it okay for the List 
to always be live-backed by the JS array while the Object[] is a copy? 
Shouldn't the List then also always be a copy? (Java.to() produces a 
live-backed List, and for arrays it obviously must produce a copy.)
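The live-backed-vs-copy distinction in point 3 has a plain-Java analogue: Arrays.asList() returns a List view live-backed by the underlying array, while copying the elements yields an independent snapshot:

```java
import java.util.Arrays;
import java.util.List;

public class LiveViewVsCopy {
    public static void main(String[] args) {
        Integer[] backing = { 1, 2, 3 };

        List<Integer> view = Arrays.asList(backing); // live-backed view
        int[] copy = new int[backing.length];        // independent copy
        for (int i = 0; i < backing.length; i++) {
            copy[i] = backing[i];
        }

        backing[0] = 42; // mutate the original

        System.out.println(view.get(0)); // 42: the view sees the change
        System.out.println(copy[0]);     // 1: the copy does not
    }
}
```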

Anyway, my general feeling was that it's better to give people an explicit API 
for array conversion than to hide a lot of implicit behavior in an automatic 
conversion, especially behavior that can have size and time requirements linear 
to the size of the input array.

Attila.

On Oct 8, 2013, at 9:02 PM, Tal Liron <[email protected]> wrote:

Until he follows up, here's how I *think* it works from looking at the code:

Argument conversion is handled by GuardingTypeConverterFactory instances. In 
Nashorn, these are NashornLinker and NashornPrimitiveLinker.

Custom converters are loaded with a ServiceLoader, and thus can be defined in 
META-INF/services/jdk.internal.dynalink.linker.GuardingDynamicLinker. If any of 
those instances also implement GuardingTypeConverterFactory, then they are 
added to the converter chain.
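For example, such a provider-configuration file would contain just the implementation's fully qualified class name (com.example.MyArrayConverterLinker below is a made-up name):

```
# META-INF/services/jdk.internal.dynalink.linker.GuardingDynamicLinker
com.example.MyArrayConverterLinker
```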

But I have to say that beyond that the code is hard to follow... conversion is 
actually handled with dynamic links, so it's somewhat "meta" programming at 
that stage. I'm also unclear as to which converter is selected if multiple are 
available...

Any tips towards helping me find a solution to my issue would be appreciated. 
Indeed, a simple skeleton for a custom linker would be great!

On 10/08/2013 09:04 PM, Jim Laskey (Oracle) wrote:
You could always create your own Dynalink linker plug-in to do the wrappers you 
need. Sundar will follow up.

