But if you are saying that volatile.get() has the same effect as a
synchronized{} enter, and volatile.set() the same effect as a
synchronized{} exit (i.e. the cache is invalidated and flushed,
respectively), then why is there a performance boost at all?
I had assumed that volatile only guaranteed that that particular memory
location is kept up to date, leaving the rest of the cache intact, which
would explain a big win over synchronized.
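
(To make sure I'm comparing the same things: here is roughly what I
mean, as a made-up sketch of my own, not taken from either link.)

class StopFlag
{
    // volatile: the write becomes visible to a subsequent read,
    // and no monitor is acquired anywhere.
    private volatile boolean volatileStop = false;

    void requestStopVolatile()    { volatileStop = true; }
    boolean stoppedVolatile()     { return volatileStop; }

    // synchronized: the same visibility guarantee, but every access
    // also pays for the monitor enter/exit.
    private boolean lockedStop = false;

    synchronized void requestStopLocked()  { lockedStop = true; }
    synchronized boolean stoppedLocked()   { return lockedStop; }
}

If a volatile access really behaves like a full enter/exit, then the
only saving I can see in the volatile half is skipping the lock itself.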

And the example (from the second link) is even more remarkable:

class VolatileExample {
  int x = 0;
  volatile boolean v = false;
  public void writer() {
    x = 42;
    v = true;
  }

  public void reader() {
    if (v == true) {
      //uses x - guaranteed to see 42.
    }
  }
}
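
(Restating the claim in my own words, as comments, just so I'm sure I'm
reading it right — this is my paraphrase, not from the article:)

class VolatileExampleAnnotated {
  int x = 0;
  volatile boolean v = false;

  public void writer() {
    x = 42;        // (1) plain write
    v = true;      // (2) volatile write; program order puts (1) before (2)
  }

  public void reader() {
    if (v == true) {   // (3) a volatile read that happens to see (2)
      int seen = x;    // (4) claim: (2) happens-before (3), so this sees 42
    }
  }
}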

Why would v = true ever be seen by another thread? Just because there
is an x access somewhere "nearby"/in the same class... what?

I tested this and it doesn't work as advertised:

package org.apache.pivot;

public class Main
{
    public static void main( String[] args )
    {
        final Main m = new Main();
        Thread t1 = new Thread( new Runnable()
        {

            @Override
            public void run()
            {
                try
                {
                    Thread.sleep( 50 );
                }
                catch( InterruptedException e )
                {
                    e.printStackTrace();  //To change body of catch statement use File | Settings | File Templates.
                }
                m.writer();
            }
        } );
        Thread t2 = new Thread( new Runnable()
        {
            @Override
            public void run()
            {
                int i = 0;
                try
                {
                    while( true )
                    {
                        m.reader();
                        i++;
                    }
                }
                catch( InterruptedException e )
                {
                    System.out.println( "Works: " + i );
                }
            }
        } );
        t1.start();
        t2.start();
    }

    private volatile int x = 0;
    private boolean v = false;

    private void writer()
    {
        x = 42;
        v = true;
    }

    private void reader()
        throws InterruptedException
    {
        if( v == true )
        {
//            if( x == 42 )
            throw new InterruptedException( "" + x );
        }
    }
}

This code doesn't stop.
BUT if I uncomment the "if( x == 42 )" line, then it does stop. WHY ON EARTH??



I am getting more confused by the hour here... :-(


On Fri, Aug 26, 2011 at 12:45 PM, Stuart McCulloch <[email protected]> wrote:
> On 26 Aug 2011, at 05:13, Niclas Hedhman wrote:
>
>> On Fri, Aug 26, 2011 at 11:06 AM, Stuart McCulloch <[email protected]> wrote:
>>> On 26 Aug 2011, at 03:57, Niclas Hedhman wrote:
>>>
>>>> I think that is correct.
>>>> Assuming the 'next' is volatile, right?
>>>
>>> I don't believe 'next' has to be volatile, since the AtomicReference acts 
>>> as a memory barrier
>>
>> If not, the changed value in 'next' might not be visible to another
>> thread, just sitting in cache of a core/cpu.
>
> According to the AtomicReference javadoc:
>
>   
> http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/atomic/package-summary.html
>
>   "The memory effects for accesses and updates of atomics generally follow 
> the rules for volatiles:
>      * get has the memory effects of reading a volatile variable.
>      * set has the memory effects of writing (assigning) a volatile variable.
>      * weakCompareAndSet atomically reads and conditionally writes a 
> variable, is ordered with respect to other memory operations on that 
> variable, but otherwise acts as an ordinary non-volatile memory operation.
>      * compareAndSet and all other read-and-update operations such as 
> getAndIncrement have the memory effects of both reading and writing volatile 
> variables."
>
> So any thread getting an instance from the pool makes at least a volatile 
> read via the atomic reference, and any thread returning an instance to the 
> pool makes at least a volatile write via the atomic reference. This sets up a 
> "happens-before" relationship, which will cause any pending changes in the 
> cache to be flushed to main memory:
>
>   http://jeremymanson.blogspot.com/2008/11/what-volatile-means-in-java.html
>   http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile
>
> Or in other words you don't have to make every shared variable volatile when 
> doing inter-thread communication, you just need to have the right 
> "happens-before" relationship on at least one shared variable. (This is also 
> why double-checked locking works correctly in Java5+ whereas it didn't in 
> earlier JVMs).
>
> But if you want to be really, really sure then marking the 'next' field as 
> volatile won't cause problems (may affect performance, but probably not by 
> much)
>
>>
>> Cheers
>> --
>> Niclas Hedhman, Software Developer
>> http://www.qi4j.org - New Energy for Java
>>
>> I live here; http://tinyurl.com/3xugrbk
>> I work here; http://tinyurl.com/24svnvk
>> I relax here; http://tinyurl.com/2cgsug
>>
>> _______________________________________________
>> qi4j-dev mailing list
>> [email protected]
>> http://lists.ops4j.org/mailman/listinfo/qi4j-dev
>
>
> _______________________________________________
> qi4j-dev mailing list
> [email protected]
> http://lists.ops4j.org/mailman/listinfo/qi4j-dev
>



-- 
Niclas Hedhman, Software Developer
http://www.qi4j.org - New Energy for Java

I live here; http://tinyurl.com/3xugrbk
I work here; http://tinyurl.com/24svnvk
I relax here; http://tinyurl.com/2cgsug

_______________________________________________
qi4j-dev mailing list
[email protected]
http://lists.ops4j.org/mailman/listinfo/qi4j-dev
