Recall that some time ago, we discussed several different directions for handling annotations on record components.

Approach A: Record components become a new annotation target kind; if an annotation is meta-annotated with this target kind, it can be applied to record components.  Reflection over annotations on record components is exposed just as it is for other annotatable elements.

Approach B: Annotations on record components are merely “pushed down” to the corresponding JLS-mandated API elements (constructor parameters, accessor methods, fields), according to the allowed target kinds of the annotation (if the annotation is only valid on fields, it is pushed down only to fields).

Approach B+: Like B, except that we continue to reify the provenance of the annotations, and expose them through reflection as annotations on the record component _in addition to_ annotations on the mandated API elements.

In an alternate universe where we had done records first, and were now adding annotations, we’d surely pick A.  However, in the current universe, picking A would put us in an adoption bind: we would have to wait for individual annotations to acquire knowledge of the new target kind (through the @Target meta-annotation), and for frameworks to become aware of annotations on record components, before we could migrate classes dependent on those annotations/frameworks to be records.  Further, library authors suffer a familiar problem: if @Foo is meta-annotated with a target kind of RECORD_COMPONENT, then it must have been compiled against JDK 14+, and the resulting classes depend on JDK 14+, unless the library uses something like multi-release (MR) JARs to carry two versions in one JAR.  This would further impede adoption.
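To make the adoption problem concrete, here is a sketch of what opting in would look like under A (RECORD_COMPONENT stands for the would-be new ElementType constant; the name is illustrative, not settled):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Target;

    // Under A, @Foo must opt in to the new target kind before it can be
    // written on a record component; merely referencing the new constant
    // ties this source (and the resulting class file) to JDK 14+.
    @Target({ ElementType.FIELD, ElementType.RECORD_COMPONENT })
    @interface Foo { }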

For guidance in our A/B choice, we can look to enums.  Enum constants are surely a first-class language element, and can be annotated, but they do not have their own annotation target kind; instead, the compiler pushes the annotations down onto the fields that carry the enum constants.  While this might seem like an uneasy dependence on the translation strategy, that translation strategy is in fact mandated (because we want migrating between a class with static constant fields and an enum to be binary-compatible).
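For reference, here is how that plays out with enums today (a minimal sketch using @Deprecated, which is retained at runtime and applicable to fields):

    enum Planet {
        @Deprecated PLUTO,   // annotation written on the enum constant...
        EARTH
    }

    // ...is pushed down to the static field that carries the constant, so
    // ordinary field reflection sees it:
    //   Planet.class.getField("PLUTO").isAnnotationPresent(Deprecated.class)  // true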

Records are in a similar boat to enums; while there is a translation strategy going on here, its elements are mandated by the language specification.  So I think the trick that enums use is a reasonable one to carry forward to records, allowing us to seriously consider B/B+.  (Strategy A also carries a lot of accidental detail: class file attributes, bookkeeping to track exactly what is being annotated, new reflection API surface, etc.)

The following type-checking strategy applies to both B and B+ (a small sketch follows the list):

 - A record component may be annotated by a declaration annotation that either has no @Target meta-annotation, or whose target kinds include one or more of PARAMETER, FIELD, or METHOD.
 - The type of a record component may be annotated by a type annotation.
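For example (a sketch; @Foo and @NonNull are hypothetical annotations, declared here only to illustrate the two rules):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Declaration annotation whose targets include FIELD and METHOD, so by
    // the first rule it may be written on a record component.  (Runtime
    // retention is chosen so a later reflection sketch can see it.)
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ ElementType.FIELD, ElementType.METHOD })
    @interface Foo { int value(); }

    // Type annotation; by the second rule it may annotate the component's type.
    @Target(ElementType.TYPE_USE)
    @interface NonNull { }

    record R(@Foo(1) @NonNull String name) { }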

Strategy B then entails pushing the annotations down, through tree manipulation, to the right places.  PARAMETER annotations are pushed down to the parameters of the implicit constructor; FIELD annotations, to the fields; METHOD annotations, to the accessors; and type annotations, to the corresponding type uses in the constructor parameters, field declarations, and accessor methods.  (And if an annotation is applicable to more than one of these, it is pushed down to all applicable targets.)
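Continuing the sketch above (with the hypothetical @Foo targeting FIELD and METHOD, and @NonNull targeting TYPE_USE), the push-down makes the record behave roughly as if the mandated members had been written by hand:

    // record R(@Foo(1) @NonNull String name) { }  behaves roughly like:
    final class R {
        @Foo(1) private final @NonNull String name;   // FIELD target, plus the type use

        R(@NonNull String name) {        // @Foo does not target PARAMETER, so only
            this.name = name;            // the type annotation lands here
        }

        @Foo(1) public @NonNull String name() {       // METHOD target, plus the type use
            return name;
        }
    }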

But wait!  What if the author also explicitly declares, say, the accessor method?

    record R(int a) {
        int a() { return a; }
    }

No problem, we can still push the annotation down, and there is precedent for annotations being “inherited” in this way.
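Concretely, with the hypothetical METHOD-targeted @Foo from the earlier sketch, the push-down still reaches the hand-written accessor:

    record R(@Foo(1) int a) {
        // Explicitly declared accessor; after push-down it behaves as if
        // it had been declared as  @Foo(1) int a() { return a; }
        int a() { return a; }
    }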

But wait!  What if the author explicitly declares the same annotation, but with conflicting values?

    record R(@Foo(1) int a) {
        @Foo(2) int a() { return a; }
    }

We can still push down @Foo(1), and then check whether @Foo is a repeatable annotation.  If it is, great; if not, then a() has two @Foo annotations, which results in a compilation error.  So we always push down, and then enforce the usual arity rules.
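In sketch form (again with hypothetical annotation types):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Repeatable;
    import java.lang.annotation.Target;

    // If @Foo is repeatable, the pushed-down @Foo(1) and the explicit @Foo(2)
    // simply coexist on a(), wrapped in the containing annotation.
    @Repeatable(FooContainer.class)
    @Target({ ElementType.FIELD, ElementType.METHOD })
    @interface Foo { int value(); }

    @Target({ ElementType.FIELD, ElementType.METHOD })
    @interface FooContainer { Foo[] value(); }

    // If @Foo were NOT repeatable, a() would end up with two @Foo
    // annotations, and the compiler would report the usual
    // duplicate-annotation error.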

By pushing annotations down in this manner, existing reflection can pick up the annotations on the various class members with no additional work or reflection API surface.  Are we done?
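For instance, assuming record R(@Foo(1) int a) { } with the runtime-retained @Foo from the earlier sketch (targets FIELD and METHOD), plain pre-existing reflection already sees the pushed-down annotations:

    import java.lang.annotation.Annotation;
    import java.util.Arrays;

    class Demo {
        public static void main(String[] args) throws ReflectiveOperationException {
            Foo onField    = R.class.getDeclaredField("a").getAnnotation(Foo.class);   // @Foo(1)
            Foo onAccessor = R.class.getDeclaredMethod("a").getAnnotation(Foo.class);  // @Foo(1)
            // PARAMETER is not among @Foo's targets here, so nothing was pushed
            // to the canonical constructor's parameter:
            Annotation[][] onCtorParams =
                R.class.getDeclaredConstructor(int.class).getParameterAnnotations();
            System.out.println(onField + ", " + onAccessor + ", "
                    + Arrays.deepToString(onCtorParams));
        }
    }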

We might be done, or we might want to do more (strategy B+).  In B+, we _additionally_ reify which annotations were present on the component, and (possibly) expose additional reflection API surface to query annotations on record components. Why would we want to do this?  Well, one reason that occurs to me is that we’ve been holding the move of “abstract records” and records extending abstract records in our back pocket.  In this case, we might wish to copy annotations down from a record component in a superclass to the corresponding pseudo-component in the subclass, for example.  But, I’m not particularly compelled by this — I think the strategy we took for enums is mostly good enough.  So I’m voting for pure B.
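(For concreteness only: if we did go B+, the additional reflective surface might look something like the hypothetical sketch below; the names are invented for illustration, not a proposal.)

    // Hypothetical B+ surface, names invented for illustration:
    // for (RecordComponent rc : R.class.getRecordComponents()) {
    //     Annotation[] onComponent = rc.getAnnotations();  // as written on the component itself
    // }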
