Gil, thanks for your response. It is very helpful. 

In your specific example above, there is actually no ordering question, 
because your writeTask() operation doesn't actually observe the state 
changed by connection.configureBlocking(false)


I agree that my question wasn't phrased correctly. There is no 'ordering' 
issue here; I meant visibility. 


 Without the use of synchronized in isBlocking(), the use of synchronized 
in configureBlocking() wouldn't make a difference.

Yes, semi-synchronized doesn't work. So I conclude that, without 
synchronization on both sides, the result of `blocking = false` could be 
invisible to writeTask, am I right?


As for your question about the possibility of "skipping" some write operations:


By skipping I meant 'being invisible to observers'. For example, if one 
thread t1 reads a non-volatile integer x, it is possible that t1 always 
sees the same value of x (even though another thread t2 keeps modifying 
x). 
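To make the point concrete, here is a minimal sketch of my own (not from the thread): a reader thread spins on a flag set by a writer. With `volatile` the JMM guarantees the reader eventually observes the write; if `volatile` were removed, the JIT could hoist the read out of the loop and t1 might spin forever, always seeing the same stale value.

```java
public class VisibilityDemo {
    // Try removing 'volatile': the reader may then never see the update.
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            while (!stop) {
                // busy-wait; without volatile this read could be cached
                // in a register and never re-fetched
            }
            System.out.println("t1 observed stop");
        });
        t1.start();
        Thread.sleep(100);   // give t1 time to enter the loop
        stop = true;         // volatile write: guaranteed to become visible to t1
        t1.join();
    }
}
```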


1. I'm also curious about a situation like this:

   while (true) {
        SocketChannel connection = serverSocketChannel.accept();
        connection.configureBlocking(false);
        unsafe.storeFence(); // storeFence() is an instance method on sun.misc.Unsafe
        executor.execute(() -> writeTask(connection));
    }

    void writeTask(SocketChannel s){
        (***)
        any_static_global_field = s.isBlocking();
    }

To my eye it should work, but I have doubts. What does storeFence mean? 
"Flush the store to memory immediately" - so the write will be visible 
before the executor thread starts. But it seems that a load fence is not 
necessary at (***). Why? The blocking field must be read from memory: 
there is no possibility that it is cached in a register, because the 
executor thread reads it for the first time. It may be cached in a CPU 
cache, but the cache is coherent, so no problem. Moreover, there is no 
need to ensure ordering here. So loadFence is not necessary, yes? 
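For what it's worth, the java.util.concurrent documentation already guarantees that actions in a thread prior to submitting a Runnable to an Executor happen-before its execution begins, so the explicit fence may be redundant here. A minimal, self-contained sketch of that guarantee (using a plain stand-in class instead of SocketChannel, which is my own substitution):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PublishDemo {
    // Hypothetical stand-in for SocketChannel, with a plain (non-volatile) field.
    static class Connection {
        boolean blocking = true;
        boolean isBlocking() { return blocking; }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(1);
        Connection connection = new Connection();
        connection.blocking = false;   // plain write, before submission
        // execute() creates a happens-before edge: the task is guaranteed
        // to see the write above, without any explicit fence.
        executor.execute(() ->
                System.out.println("blocking=" + connection.isBlocking()));
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```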

2. 
volatile int foo;
...
foo = 1;
foo = 2;
foo = 3;



It is very interesting. So, after being JIT-compiled on x86 it could look like:

mov &foo, 1
sfence
mov &foo, 2
sfence
mov &foo, 3
sfence



Are you sure that the CPU can execute that as:
mov &foo, 3
sfence


?

I know that: 

mov &foo, 1
mov &foo, 2
mov &foo, 3 

is something an x86 CPU can legally optimize (coalesce) for plain stores. 
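A related observation, in a sketch of my own: however the stores end up coalesced, a reader that synchronizes only after all three writes is guaranteed nothing more than the final value, which is why "skipping" the intermediate ones is unobservable to it.

```java
public class FinalValueDemo {
    static volatile int foo;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            foo = 1;  // an intermediate value a concurrent reader may miss
            foo = 2;  // likewise: no reader is guaranteed to observe every value
            foo = 3;  // the last volatile store
        });
        writer.start();
        writer.join();            // join() establishes happens-before with the writer
        System.out.println(foo);  // only the final value, 3, is guaranteed here
    }
}
```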



On Friday, 9 March 2018 23:20:37 UTC+1, John Hening wrote:
>
>
>     executor = Executors.newFixedThreadPool(16);
>     while(true) {
>         SocketChannel connection = serverSocketChannel.accept();
>         connection.configureBlocking(false);
>         executor.execute(() -> writeTask(connection)); 
>     }
>     void writeTask(SocketChannel s){
>         s.isBlocking();
>     }
>
>     public final SelectableChannel configureBlocking(boolean block) throws 
> IOException
>     {
>         synchronized (regLock) {
>             ...
>             blocking = block;
>         }
>         return this;
>     }
>
>
>
> We see the following situation: the main thread is calling 
> connection.configureBlocking(false)
>
> and another thread (launched by executor) is reading that. So, it looks 
> like a datarace.
>
> My question is:
>
> 1. Here 
> configureBlocking
>
> is synchronized, so it behaves as a memory barrier. It means that the code 
> is OK even if reading/writing the 
> blocking
>
> field is not synchronized, since reading/writing a boolean is atomic.
>
> 2. What if 
> configureBlocking
>
> wouldn't be synchronized? What then? I think that it would be necessary 
> to emit a memory barrier, because it is theoretically possible that 
> setting the blocking field could be reordered. 
>
> Am I right?
>

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.