Hi David,

I was thinking about the fact that java.lang.invoke code is leaking into java.lang.Class. If you don't mind rewriting the code, perhaps a better structure would be for the j.l.Class changes to consist only of adding a simple:

+ // A reference to the canonicalizing cache of java.lang.invoke.MemberName(s)
+ // for members declared by the class represented by this Class object
+ private transient volatile Object memberNameData;

...and nothing else. All the logic could live in MemberName itself (together with the Unsafe machinery for accessing/CAS-ing Class.memberNameData).
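
For illustration, a rough sketch of what that Unsafe machinery might look like inside MemberName (purely a sketch: memberNameDataOf is a hypothetical helper, MemberNameData is the class sketched further below, and Unsafe.getUnsafe() is usable here because MemberName lives on the boot class path):

    private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
    private static final long MEMBER_NAME_DATA_OFFSET;
    static {
        try {
            MEMBER_NAME_DATA_OFFSET = UNSAFE.objectFieldOffset(
                Class.class.getDeclaredField("memberNameData"));
        } catch (NoSuchFieldException e) {
            throw new Error(e);
        }
    }

    // get or lazily install the per-class cache, losing a CAS race gracefully
    private static MemberNameData memberNameDataOf(Class<?> klass) {
        Object data = UNSAFE.getObjectVolatile(klass, MEMBER_NAME_DATA_OFFSET);
        if (data == null) {
            MemberNameData fresh = new MemberNameData();
            if (UNSAFE.compareAndSwapObject(klass, MEMBER_NAME_DATA_OFFSET, null, fresh)) {
                data = fresh;
            } else {
                data = UNSAFE.getObjectVolatile(klass, MEMBER_NAME_DATA_OFFSET);
            }
        }
        return (MemberNameData) data;
    }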

Now for an idea about the implementation. Since the VM code does no binary search and only scans the array linearly when it has to update MemberNames, it could be changed to scan a linked list of MemberName(s) instead. You could add a field to MemberName:

class MemberName {
    ...
    // next MemberName in the chain of interned MemberNames for a particular declaring class
    private MemberName next;


Then have a volatile field in MemberNameData (or ClassData, whatever you call it):

class MemberNameData {
    ...
    // a chain of interned MemberName(s) for a particular declaring class,
    // accessed by the VM when it has to modify them in place
    private volatile MemberName memberNames;

    MemberName add(Class<?> klass, int index, MemberName mn, int redefined_count) {
        mn.next = memberNames;
        memberNames = mn;
        if (jla.getClassRedefinedCount(klass) == redefined_count) { // no changes to class
            ...
            ... code to update the sorted array of MemberName(s) with the new 'mn'
            ...
            return mn;
        }
        // lost the race, undo the insertion
        memberNames = mn.next;
        return null;
    }


This way all the worries about the ordering of writes into the array and/or size are gone. The array is still used to search quickly for an element, but the VM only scans the linked list.
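
For completeness, a sketch of what the Java-only search side might then look like, i.e. an ordinary binary search over the sorted array (elementData, size and compare are just placeholder names here, not taken from your webrev):

    MemberName search(MemberName key) {
        MemberName[] a = elementData;       // sorted array, read and written only by Java code
        int lo = 0, hi = size - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int cmp = compare(a[mid], key); // whatever total order the interning uses
            if (cmp < 0) lo = mid + 1;
            else if (cmp > 0) hi = mid - 1;
            else return a[mid];
        }
        return null;                        // not interned yet
    }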

What do you think of this?

Regards, Peter


On 11/03/2014 05:36 PM, David Chase wrote:
My main concern is that the compiler is inhibited from any peculiar code 
motion; I assume that taking a safe point has a bit of barrier built into it 
anyway, especially given that the worry case is safepoint + JVMTI.

Given the worry, what’s the best way to spell “barrier” here?
I could synchronize on classData (it would be a recursive lock in the current 
version of the code)
   synchronized (this) { size++; }
or I could synchronize on elementData (no longer used for a lock elsewhere, so 
always uncontended)
   synchronized (elementData) { size++; }
or is there some Unsafe thing that would be better?
You're worried that the writes moving array elements up by one slot could bubble
up before the write of size = size+1, right? If that happens, the VM could skip
an existing (last) element and not update it.
exactly, with the restriction that it would be compiler-induced bubbling, not 
architectural.
Which is both better, and worse — I don’t have to worry about crazy hardware, 
but the rules
of java/jvm "memory model" are not as thoroughly defined as those for java 
itself.

I added a method to Atomic (.storeFence()). New webrev to come after I rebuild and retest.

Thanks much,

David

It seems that an Unsafe.storeFence() between the size++ and the moving of elements
could do the job, as its javadoc says:

    /**
     * Ensures lack of reordering of stores before the fence
     * with loads or stores after the fence.
     * @since 1.8
     */
    public native void storeFence();
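
That is, roughly (just a sketch; size and elementData are the names from your mail above, while index, mn and UNSAFE stand for whatever the webrev actually uses):

    size = size + 1;        // publish the new size first
    UNSAFE.storeFence();    // keep the size store from floating below the element moves
    // ...then shift elements up by one slot and store the new element
    System.arraycopy(elementData, index, elementData, index + 1, size - 1 - index);
    elementData[index] = mn;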
