There has been, not surprisingly, a lot of misunderstanding about atomicity, non-atomicity, and tearing.  In particular, various syntactic expressions of non-atomicity (e.g., a `non-atomic` class keyword) tend to confuse users into thinking that non-atomic access is somehow a *feature*, rather than providing more precise control over the breakage modes of already-broken programs (to steer optimizations for non-broken programs.)

I've written the following as an attempt to help people understand the role of atomicity and tearing in the model; comments are welcome (though let's steer clear of trying to paint the bikeshed in this thread.)



# Understanding non-atomicity and tearing

Almost since the beginning of Project Valhalla, the design has included some
form of "non-atomicity" or "tearability".  Addressing this in the programming model is necessary if we are to achieve the heap flattening that Valhalla wants
to deliver, but unfortunately this aspect of the feature set is frequently
misunderstood.

Whether non-atomicity is expressed syntactically as a class modifier,
constructor modifier, supertype, or some other means, the concept is the same: a class indicates its willingness to give up certain guarantees in order to gain
additional heap flattening.

Unlike most language features, which express the presence or absence of something that is at some level "normal" (e.g., the presence or absence of `final` on a field means it either cannot, or can, be reassigned), non-atomicity is about what the possible observable effects are when an instance of the
class is accessed with a data race.  Programs with data races are _already
broken_, so rather than opting into or out of a feature, non-atomicity
expresses a choice between "breakage mode A" and "breakage mode B".

> Non-atomicity is best thought of not as a _feature_ or the absence thereof,
> but an alternate choice about the runtime-visible behavior of _already-broken
> programs_.

## Background: flattening and tearing in built-in primitives

Flattening and non-atomicity have been with us since Java 1.0. The eight
built-in primitive types are routinely flattened into object layouts and arrays. This "direct" storage is enabled by several design choices made about primitives:
primitive types are non-nullable, and their zero values are explicitly
"good" defaults, so even "uninitialized" primitives have useful
initial values.

Further, the two 64-bit primitive types (`long` and `double`) are explicitly
permitted to _tear_ when accessed via a data race, as if they are read and
written using two 32-bit loads and stores.  When a mutable `long` or `double` is
read with a data race, it may be seen to have the high-order 32 bits of one
previous write and the low-order 32 bits of another.  This is because at the
time, atomic 64-bit loads and stores were prohibitively expensive on most
processors, so we faced a tradeoff: punish well-behaved programs with
poorly-performing numerics, or allow already-broken programs (concurrent
programs with insufficient synchronization) to be seen to produce broken numeric
results.
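To make this concrete, here is a minimal sketch (my own demo, not from the original text; class and variable names are invented) of what the JMM permits for a racily accessed `long`. A writer alternates between all-zero and all-one bits, and a racing reader may combine the high half of one write with the low half of another, so only four distinct values are possible -- though most modern 64-bit JVMs will never actually tear in practice:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class LongTearingDemo {
    static long shared;  // mutable, non-volatile: reads and writes may tear

    public static void main(String[] args) throws InterruptedException {
        Set<Long> seen = Collections.synchronizedSet(new HashSet<>());
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++)
                shared = (i & 1) == 0 ? 0L : -1L;  // all-zero / all-one bits
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++)
                seen.add(shared);  // racy read
        });
        writer.start(); reader.start();
        writer.join();  reader.join();

        // Every observed value must combine 32-bit halves of previously
        // written values -- torn or not.  No other outcome is permitted.
        Set<Long> allowed = Set.of(0L, -1L,
                                   0xFFFFFFFF_00000000L,
                                   0x0000_0000_FFFF_FFFFL);
        for (long v : seen)
            if (!allowed.contains(v))
                throw new AssertionError("impossible: " + Long.toHexString(v));
        System.out.println("distinct values seen: " + seen.size());
    }
}
```

Whether a torn value is ever observed depends on the hardware and JVM; the demo only verifies that nothing *outside* the permitted set appears.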

In most similar situations, Java would have come down on the side of
predictability and correctness. However, numeric performance was important
enough, and data races enough of an "all bets are off" sort of thing, that this set of decisions was a pragmatic compromise.  While tearing sounds scary, it is
important to reiterate that tearing only happens when programs _are already
broken_, and that even if we outlawed tearing, _something else bad_ would still
happen.

Valhalla takes these implicit characteristics of primitives and formalizes them as explicit characteristics of value classes in the programming model, enabling
user-defined classes to gain the runtime characteristics of primitives.

## Data races and consistency

A _data race_ occurs when a nonfinal heap variable (array element or nonfinal field) is accessed by multiple threads, at least one access is a write, and the reads and writes of that variable are not ordered by _happens-before_ (see JLS Ch17 or
_Java Concurrency in Practice_ Ch16.)  In the presence of a data race, the
reading thread may see a stale (out of date) value for that variable.

"Stale" doesn't sound so bad, but in a program with multiple variables, the
error states can multiply with the number and configuration of mutable
variables.  Suppose we have two `Range` classes:

```
class MutableRange {
    int low, high;

    MutableRange(int low, int high) {
        if (low > high) throw new IllegalArgumentException();
        this.low = low;
        this.high = high;
    }

    // obvious accessor and updater methods; updaters also
    // validate the invariant low <= high
}

class ImmutableRange {
    final int low, high;

    ImmutableRange(int low, int high) {
        if (low > high) throw new IllegalArgumentException();
        this.low = low;
        this.high = high;
    }

    // obvious accessors
}

static final MutableRange mr = new MutableRange(0, 10);
static ImmutableRange ir = new ImmutableRange(0, 10);
```

For `mr`, we have a final reference to a mutable object, so there are two mutable variables here (`mr.low` and `mr.high`).  We update our range value through a
method that mutates `low` and/or `high`.  By contrast, `ir` is a mutable
reference to an immutable object, with one mutable variable (`ir`), and we
update our range value by creating a new `ImmutableRange` and mutating the
reference `ir` to refer to it.

More things can go wrong when we racily access the mutable range, because there are more mutable variables.  Suppose Thread A writes `low` and then writes `high`,
while Thread B reads `low` and then `high`.  Under racy access, B could see a stale or
up-to-date value for either field, and even if it sees an up-to-date value for
`high` (the one written later), that still doesn't mean it will see an
up-to-date value for `low`.  This means that in addition to seeing out-of-date values for either or both fields, we could observe an instance of `MutableRange` that does not
obey the invariant checked by its constructor and setters.
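This scenario can be sketched as a runnable demo (my own, not from the original text; names are invented).  A writer alternates the range between (0, 10) and (20, 30), and a racy reader may observe a mix of the two writes -- e.g. `low = 20` together with `high = 10` -- even though every constructor and updater checks the invariant:

```java
public class MutableRangeRace {
    static class MutableRange {
        int low, high;
        MutableRange(int low, int high) { set(low, high); }
        void set(int low, int high) {          // validates the invariant...
            if (low > high) throw new IllegalArgumentException();
            this.low = low;                    // ...but the two field writes
            this.high = high;                  // are not atomic under race
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MutableRange r = new MutableRange(0, 10);
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++)
                if ((i & 1) == 0) r.set(0, 10); else r.set(20, 30);
        });
        long[] violations = {0};               // written only by the reader
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                int lo = r.low, hi = r.high;   // racy reads of two variables
                if (lo > hi) violations[0]++;  // invariant seen not to hold
            }
        });
        writer.start(); reader.start();
        writer.join();  reader.join();
        // A given run may legitimately print 0; the point is that the JMM
        // does not forbid a nonzero count.
        System.out.println("invariant violations observed: " + violations[0]);
    }
}
```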

Suppose instead we racily access the immutable range.  At least there are fewer possible error states; a reader might see a stale _reference_ to the immutable
object.  Access to `low` and `high` through that stale reference would see
out-of-date values, but those out-of-date values would at least be consistent
with each other (because of the initialization safety guarantees of final
fields.)
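The contrast can also be sketched as a demo (again my own, not from the original text).  Here the same race exists on the reference `ir`, but the final-field initialization safety guarantee (JLS 17.5) means the reader can *never* observe an inconsistent pair of fields, provided the constructor does not leak `this`:

```java
public class ImmutableRangeRace {
    static class ImmutableRange {
        final int low, high;
        ImmutableRange(int low, int high) {
            if (low > high) throw new IllegalArgumentException();
            this.low = low;
            this.high = high;
        }
    }

    static ImmutableRange ir = new ImmutableRange(0, 10);  // racy reference

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++)
                ir = new ImmutableRange(i, i + 10);  // publish a new instance
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                ImmutableRange r = ir;       // possibly stale reference...
                if (r.low > r.high)          // ...but fields are consistent,
                    throw new AssertionError(r.low + " > " + r.high);
            }
        });
        writer.start(); reader.start();
        writer.join();  reader.join();
        System.out.println("no invariant violations, as guaranteed");
    }
}
```

The reader may see any previously published range, but never a half-constructed one.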

When primitives other than `long` or `double` are accessed with a data race, the failure modes are like those of `ImmutableRange`; when we accept that `long` or `double` could tear under race, we are additionally accepting the failure modes
of `MutableRange` under race for those types as well, as if the high- and
low-order 32-bit quantities were separate fields (in exchange for better
performance).  Accepting non-atomicity of large primitives merely _increases_
the number of observable failure modes for broken programs; even with atomic
access, such programs are still broken and can produce observably incorrect
results.

Note that a `long` or `double` will never tear if it is `final`, `volatile`,
only accessed from a single thread, or accessed concurrently with appropriate synchronization.  Tearing only happens in the presence of concurrent access to
mutable variables with insufficient synchronization.
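As a minimal sketch (my own, not from the original text), either of the following declarations is enough to rule out tearing of a 64-bit variable, even under a data race:

```java
import java.util.concurrent.atomic.AtomicLong;

public class NoTearing {
    volatile long counter;                      // volatile: atomic loads/stores
    final AtomicLong total = new AtomicLong();  // or an atomic wrapper class

    void update() {
        counter = -1L;        // written and read as a single 64-bit unit
        total.set(-1L);
    }

    public static void main(String[] args) {
        NoTearing n = new NoTearing();
        n.update();
        if (n.counter != -1L || n.total.get() != -1L)
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

(Note that `volatile` and `AtomicLong` also add visibility and ordering guarantees beyond mere untearability, which is why they eliminate the data race rather than just one of its symptoms.)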

## Non-atomicity and value types

Hardware has improved significantly since Java 1.0, so the specific tradeoff
faced by the Java designers regarding `long` and `double` is no longer an issue,
as most processors have fast atomic 64-bit load and store operations today.
However, Valhalla will still face the same problem, as value types can easily
exceed 64 bits in size, and whatever the limit on efficient atomic loads and
stores is, we can easily write value types that will exceed that size.  This
leaves us with three choices:

 - Never allow tearing of values, as with `int`;
 - Always allow tearing of values under race, as with `long`;
 - Allow tearing of values under race based on some sort of opt-in or opt-out.

Note that tearing is not anything anyone ever _wants_, but it is sometimes an
acceptable tradeoff to get more flattening.  It was a sensible tradeoff for
`long` and `double` in 1995, and will continue to be a sensible tradeoff for at
least some value types going forward.

The first choice -- values are always atomic -- offers the most safety, but
means we must forgo one of the primary goals of Valhalla for all but the
smallest value types.

This leaves us with "values are always like `long`", or "values can opt into / out of being like `long`."  Types like `long` have the interesting property that
all bit patterns correspond to valid values; there are no representational
invariants for `long`.  On the other hand, values are classes, and can have
representational invariants that are enforced by the constructor.  Having
representational invariants for immutable classes be seen not to hold would be a significant new failure mode, and so we took the safe route, requiring class authors to make the tradeoff between flattening and failure modes under
race.

Just as with `long` and `double`, a value will never tear if the variable that holds the value is `final`, `volatile`, only accessed from a single thread, or accessed concurrently with appropriate synchronization.  Tearing only happens in
the presence of concurrent access to mutable variables with insufficient
synchronization.

Further, tearing under race will only happen for non-nullable variables of value
types that support default instances.

What remains is to offer sensible advice to authors of value classes as to when to opt into non-atomicity.  If a class has any cross-field invariants (such as `ImmutableRange`), atomicity should definitely be retained.  In the remaining
cases, class authors (like the creators of `long` or `double`) must make a
tradeoff about the perceived value of atomicity vs flattening for the expected
range of users of the class.
