I'm reading the great article at
https://shipilev.net/blog/2014/nanotrusting-nanotime/ (thanks Aleksey! :))
and I am not sure whether I understand one part of it correctly.

First, the article compares the performance of plain and volatile writes:


Benchmark                            Mode  Samples    Mean  Mean error  Units
o.s.VolatileWriteSucks.incrPlain     avgt      250   3.589       0.025  ns/op
o.s.VolatileWriteSucks.incrVolatile  avgt      250  15.219       0.114  ns/op


and then it states:

"In real code, the heavy-weight operations are mixed with relatively low-weight 
ops, which amortize the costs."

And my question is: what exactly does it mean to amortize the costs? My own 
explanation is that the amortization comes from the CPU's out-of-order 
execution, is that right? So even though a volatile write takes much more time 
than a plain write, it isn't so painful because the CPU executes other 
instructions out of order (when it can).
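Or is it simply arithmetic: if one expensive volatile write is surrounded by 
many cheap plain ops, the average cost per operation stays close to the cheap 
one? A small sketch using the numbers from the table above (the mix ratios are 
my own hypothetical assumption, not from the article):

```java
public class AmortizedCost {
    public static void main(String[] args) {
        // ns/op figures quoted from the benchmark table above
        double plainNs = 3.589;
        double volatileNs = 15.219;

        // Hypothetical workload: n plain writes per one volatile write.
        // Average cost per op = (n * plain + 1 * volatile) / (n + 1).
        for (int n : new int[] {1, 10, 100}) {
            double amortized = (n * plainNs + volatileNs) / (n + 1);
            System.out.printf("%3d plain per volatile -> %.3f ns/op%n",
                              n, amortized);
        }
    }
}
```

With 100 plain ops per volatile write, the average is already within ~4% of 
the plain-write cost, which is one reading of "amortize" even without any 
out-of-order overlap.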

What do you think?

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
