> On Feb 26, 2019, at 3:09 PM, Hefty, Sean <[email protected]> wrote:
> 
>>> There is no guarantee that NIC/network based atomics will be coherent with
>>> CPU based atomics, or that they will be coherent between NICs, or the final
>>> result will even be atomic.  [...]
>> 
>> Would I be correct in reading that last clause as “or [that] the [visibility
>> of the] final result will even be atomic”, meaning that visibility of one
>> subpart (byte, for example) of the result should not be taken as evidence of
>> visibility of the whole?  Or in other words, that the paragraph about CPU
>> visibility provides the only guarantees of visibility.  ...
> 
> I wasn't trying to write spec language. :)

Nor are man pages the place for spec language! :-)    But someday a spec would 
be nice, once the interface definition is ripe for it.  Personally, I’m quite 
fond of specs.


> The point I was making above was related to data correctness.  If 2 or more 
> 'actors' are both performing atomic operations on the same target memory, the 
> result is undefined.  An actor can be a NIC or CPU.
> 
> E.g. An atomic through NIC A adds 1 to each element.  An atomic through NIC B 
> subtracts 1 from each element.  The results may end up with each element 
> unchanged, incremented by 1, or decremented by 1.  And the change may not be 
> the same for each element -- some may be +1, some -1, some unchanged.
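If I model each per-element atomic as a lossy read-modify-write — which is what 
non-coherent NIC atomics degenerate to — a quick toy program reproduces exactly 
that mix.  (Plain Python threads standing in for the two NICs; nothing 
libfabric-specific here, just an illustration of the lost-update outcomes.)

```python
import random
import threading
import time

data = [0] * 8

def rmw_pass(delta):
    # One "actor" (NIC or CPU) applying a per-element atomic op.  Because the
    # two actors' atomics are not coherent with each other, each op behaves
    # like a plain read-modify-write with a window in which the other actor's
    # update can be silently overwritten (lost).
    for i in range(len(data)):
        tmp = data[i]                         # read
        time.sleep(random.uniform(0, 0.001))  # window for the other actor
        data[i] = tmp + delta                 # write back; may clobber

a = threading.Thread(target=rmw_pass, args=(+1,))  # "NIC A": +1 each element
b = threading.Thread(target=rmw_pass, args=(-1,))  # "NIC B": -1 each element
a.start(); b.start(); a.join(); b.join()

# Each element independently ends at -1, 0, or +1, and the mix varies run
# to run -- precisely the undefined result described above.
print(data)
```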

Okay, I can work with that.  The model I have in my head is that the operations 
on a given location are divided into epochs, where within each epoch all the 
atomic ops are done by just one actor.  To switch from one epoch to another, 
and thus from one actor to another, the completions for all the ops in the old 
epoch have to have been seen before the first op in the new epoch is initiated. 
(Where “before” has all the usual caveats and limitations with respect to 
parallel programming.)  No prob.
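As a sanity check on that model, here is the same lossy read-modify-write as 
above, but with the two actors fenced into non-overlapping epochs — all 
completions from epoch 1 observed before epoch 2 begins.  (Again a toy sketch 
in Python; `wait()` on the futures plays the role of reaping the completions.)

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, wait

data = [0] * 8

def rmw(i, delta):
    # Same lossy read-modify-write as before; it is safe here only because
    # the epochs never overlap, so no other actor races on this element.
    tmp = data[i]
    time.sleep(random.uniform(0, 0.001))
    data[i] = tmp + delta

with ThreadPoolExecutor(max_workers=4) as pool:
    # Epoch 1: actor A owns the locations; issue all of its ops...
    epoch1 = [pool.submit(rmw, i, +1) for i in range(len(data))]
    wait(epoch1)  # ...and see every completion before handing off.

    # Epoch 2: only now may actor B begin.
    epoch2 = [pool.submit(rmw, i, -1) for i in range(len(data))]
    wait(epoch2)

# Deterministic: every +1 and every -1 lands, so each element is back to 0.
print(data)
```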

thanks again,
greg

_______________________________________________
ofiwg mailing list
[email protected]
https://lists.openfabrics.org/mailman/listinfo/ofiwg