Maybe :) But I don't want to prune this exploration just yet.
On 5/5/2022 6:00 PM, Dan Smith wrote:
On May 5, 2022, at 1:21 PM, Brian Goetz <brian.go...@oracle.com> wrote:
Let's write this out more explicitly. Suppose that T1 writes a non-null value
as (d, t, true), and T2 writes null as (0, 0, false). Then a racing reader
could observe the mix (0, 0, true), that is, T2's zeroed fields combined with
T1's still-set flag, which means we could conceivably expose the zero value to
the user, even though a B2 class might want to hide its zero.
So, suppose instead that we implemented writing a null as simply storing false
to the synthetic boolean field. Then, in the event of a race between reader
and writer, we could only see values for date and time that were previously put
there by some thread. This satisfies the OOTA (out of thin air) safety
requirements of the JMM.
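To make the two strategies concrete, here is a minimal, single-threaded simulation of the interleavings above. The Layout class models the hypothetical flattened form (date, time, nullFlag); none of this is real Valhalla API, and the sequential field stores merely stand in for the individual memory writes that a non-atomic layout allows a reader to observe separately.

```java
public class TearingDemo {
    static class Layout {
        long date, time;
        boolean flag; // synthetic null channel: true = non-null
    }

    // Strategy A: T2 writes null as the full tuple (0, 0, false). The reader
    // samples after T2's payload stores but before its flag store lands.
    static long[] nullAsFullWrite() {
        Layout f = new Layout();
        f.date = 20220505; f.time = 1800; f.flag = true; // T1's completed write
        f.date = 0;                                      // T2 begins nulling
        f.time = 0;
        long[] observed = {f.date, f.time, f.flag ? 1 : 0}; // reader runs here
        f.flag = false;                                  // T2's last store
        return observed; // (0, 0, true): the hidden zero value escaped
    }

    // Strategy B: T2 writes null by storing only flag = false; the payload
    // fields keep whatever some thread actually put there.
    static long[] nullAsNarrowWrite() {
        Layout f = new Layout();
        f.date = 20220505; f.time = 1800; f.flag = true;
        f.flag = false;                                  // the narrow write
        return new long[]{f.date, f.time, f.flag ? 1 : 0};
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(nullAsFullWrite()));   // [0, 0, 1]
        System.out.println(java.util.Arrays.toString(nullAsNarrowWrite())); // [20220505, 1800, 0]
    }
}
```

Under strategy B, no interleaving can manufacture a date/time pair that was never written, which is exactly the OOTA-style property claimed above.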
(0, 0, false) is the initial value of a field/array, even if the VM implements a
"narrow write" strategy. That is, if one thread writes (1, 1, true) at the
moment another thread reads the fresh field, the reader could easily get
(0, 0, true).
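The same kind of simulation shows the fresh-field case: a brand-new field or array slot starts out as (0, 0, false) no matter which write strategy the VM uses, so even the first-ever non-null write can race with a reader. Again, the class and field names below are illustrative stand-ins for the flattened layout, not real API.

```java
public class FreshFieldDemo {
    static class Layout {
        long date, time;
        boolean flag; // synthetic null channel: true = non-null
    }

    // The writer stores (1, 1, true) one field at a time, flag first; the
    // reader samples after the flag store but before the payload stores.
    static long[] firstWriteRace() {
        Layout f = new Layout();           // fresh slot: (0, 0, false)
        f.flag = true;                     // writer's first store lands
        long[] observed = {f.date, f.time, f.flag ? 1 : 0}; // reader runs here
        f.date = 1;                        // remaining stores land too late
        f.time = 1;
        return observed; // (0, 0, true): the default leaked, though no
                         // thread ever wrote it as a whole
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(firstWriteRace())); // [0, 0, 1]
    }
}
```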
This is significant because the primary reason to declare a B2 rather than a B3
is to guarantee that the all-zeros value cannot be created. (A secondary
reason, valid but one I'm less sympathetic to, is that the all-zeros value is
okay but inconvenient, and it would be nice to reduce how much it pops up. A
third reason is reference-defaultness, important for migration if we don't
offer it in B3.)
This leads me to conclude that if you're declaring a non-atomic B2, you might
as well just declare a non-atomic B3.
Said differently: a B2 author usually wants to associate a cross-field
invariant with the null flag (zero-value fields iff null). But in declaring the
class non-atomic, they've sworn off cross-field invariants.
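A sketch of the two declarations may make the equivalence vivid. The syntax below is speculative Valhalla-style shorthand (the keywords and the synthetic flag are illustrative, nothing here is settled):

```
// Non-atomic B2: the author means null to imply "payload fields are zero",
// but under non-atomicity the synthetic flag can tear away from the payload,
// so the hidden (0, 0) zero can escape anyway.
non-atomic value class DateTime {     // B2: nullable
    long date, time;
    // plus an implicit "is-non-null" boolean channel in flattened layouts
}

// Non-atomic B3: the zero value is declared up front to be an ordinary,
// legitimate value, which is the only guarantee the non-atomic B2 could
// actually keep.
non-atomic primitive class DateTime { // B3: zero-default, non-null
    long date, time;                  // (0, 0) is just a value
}
```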
This was a useful discovery for me yesterday: that, in fact, nullability and
atomicity are closely related. There's a strong theoretical defense for the
idea that opting out of identity and supporting a non-null type (i.e., B3) are
prerequisites to non-atomic flattening.